Test Report: Docker_Linux_crio 21753

37d7943b58d61ad05591f3a5d0091cda14132c69:2025-10-17:41947

Failed tests (37/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.25
35 TestAddons/parallel/Registry 13.17
36 TestAddons/parallel/RegistryCreds 0.43
37 TestAddons/parallel/Ingress 149.66
38 TestAddons/parallel/InspektorGadget 5.25
39 TestAddons/parallel/MetricsServer 5.34
41 TestAddons/parallel/CSI 35.19
42 TestAddons/parallel/Headlamp 2.61
43 TestAddons/parallel/CloudSpanner 5.25
44 TestAddons/parallel/LocalPath 10.16
45 TestAddons/parallel/NvidiaDevicePlugin 6.26
46 TestAddons/parallel/Yakd 5.27
47 TestAddons/parallel/AmdGpuDevicePlugin 6.26
98 TestFunctional/parallel/ServiceCmdConnect 603.02
115 TestFunctional/parallel/ServiceCmd/DeployApp 600.66
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.93
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.4
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.86
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
154 TestFunctional/parallel/ServiceCmd/Format 0.55
155 TestFunctional/parallel/ServiceCmd/URL 0.55
191 TestJSONOutput/pause/Command 1.62
197 TestJSONOutput/unpause/Command 2.31
294 TestPause/serial/Pause 5.55
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.4
305 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.44
310 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.11
317 TestStartStop/group/old-k8s-version/serial/Pause 5.99
323 TestStartStop/group/no-preload/serial/Pause 6.37
330 TestStartStop/group/embed-certs/serial/Pause 6.86
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.76
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.6
345 TestStartStop/group/newest-cni/serial/Pause 6.14
357 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.94
TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-642189 addons disable volcano --alsologtostderr -v=1: exit status 11 (253.042658ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 18:59:09.152411  505477 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:59:09.152749  505477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:09.152760  505477 out.go:374] Setting ErrFile to fd 2...
	I1017 18:59:09.152767  505477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:09.152963  505477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 18:59:09.153264  505477 mustload.go:65] Loading cluster: addons-642189
	I1017 18:59:09.153665  505477 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:09.153700  505477 addons.go:606] checking whether the cluster is paused
	I1017 18:59:09.153802  505477 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:09.153821  505477 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:59:09.154236  505477 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:59:09.173938  505477 ssh_runner.go:195] Run: systemctl --version
	I1017 18:59:09.174009  505477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:59:09.192597  505477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:59:09.288931  505477 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:59:09.289016  505477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:59:09.322457  505477 cri.go:89] found id: "621b748d538846f79bca883df087ce87a58a6e5cc5dbbb8f2ae4845785e122d6"
	I1017 18:59:09.322479  505477 cri.go:89] found id: "317712e1d5627e3d52413fcacd6f0a3e40e74b682567b7117acb2ddbf4da2a72"
	I1017 18:59:09.322482  505477 cri.go:89] found id: "c8951bd4e7631f9e6fa9ad944251500dd44cda63d891f7b553931aa3ef22e7e7"
	I1017 18:59:09.322485  505477 cri.go:89] found id: "6073132bac88bd54a1f9014aa1b74b68de0ac557ac483e5c0a7ff51ac939a2dd"
	I1017 18:59:09.322487  505477 cri.go:89] found id: "d3882b8636526fe7f302d3351b33fc68a8df5109463693af6642869704a2b6a2"
	I1017 18:59:09.322491  505477 cri.go:89] found id: "99fe19979e6f782cb4ff1df09e72c9c58535540daf8c28b63b4a3f1719cfa365"
	I1017 18:59:09.322493  505477 cri.go:89] found id: "600ce5e0b6a8556aa7c055afc13692cb00b2d6f0a82ba6d0817e5e424b49881c"
	I1017 18:59:09.322495  505477 cri.go:89] found id: "214596c066d6ebce81c069cda2c2790ee022d3770221e7e183390decc49e626b"
	I1017 18:59:09.322498  505477 cri.go:89] found id: "cd49fb8b1ee5ca17b886edf352059ec13c3aa8fed46c8383c6660656f0403d67"
	I1017 18:59:09.322510  505477 cri.go:89] found id: "b3f4b36a5cb43ddf3de225e5f08ad4b2d165ae6234c165082ae1316a48f48425"
	I1017 18:59:09.322512  505477 cri.go:89] found id: "fc47a341f594c2a4203992ac73a7c89fb4722c54e399f0949c3604dfa81f70ef"
	I1017 18:59:09.322515  505477 cri.go:89] found id: "d3140eef7e893c98db6a57843c26dd0767733610bfaab45265577ad3a64a334e"
	I1017 18:59:09.322520  505477 cri.go:89] found id: "bce8d27694469a014387080e94f416e3cfb88071ea69506ea8a1d04b16176e43"
	I1017 18:59:09.322524  505477 cri.go:89] found id: "26a77f9d8fd20d481d8ec7b0d85a65954b10af33ae4994293044ad2067b41872"
	I1017 18:59:09.322528  505477 cri.go:89] found id: "afa9f6b0496818adf412ab2c3cf979e86f2593796860b7b9e53c8fd85f0fe586"
	I1017 18:59:09.322546  505477 cri.go:89] found id: "ea8c7aa6a69f9b8476c9c28a6ac0944597fdf78921727f3c142c41a2b6a9bb00"
	I1017 18:59:09.322558  505477 cri.go:89] found id: "05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c"
	I1017 18:59:09.322564  505477 cri.go:89] found id: "c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854"
	I1017 18:59:09.322566  505477 cri.go:89] found id: "d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd"
	I1017 18:59:09.322569  505477 cri.go:89] found id: "49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f"
	I1017 18:59:09.322571  505477 cri.go:89] found id: "43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a"
	I1017 18:59:09.322574  505477 cri.go:89] found id: "8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d"
	I1017 18:59:09.322576  505477 cri.go:89] found id: "44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa"
	I1017 18:59:09.322578  505477 cri.go:89] found id: "a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9"
	I1017 18:59:09.322580  505477 cri.go:89] found id: ""
	I1017 18:59:09.322621  505477 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 18:59:09.338090  505477 out.go:203] 
	W1017 18:59:09.339318  505477 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 18:59:09.339349  505477 out.go:285] * 
	W1017 18:59:09.343532  505477 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 18:59:09.345083  505477 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-642189 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)
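
Editor's note: the addons-disable failures in this report share one signature. Before disabling an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node (the ssh_runner call above); on this crio kicbase image /run/runc does not exist, so runc exits 1 and minikube aborts with MK_ADDON_DISABLE_PAUSED. Below is a minimal standalone Go sketch of that precheck, not minikube source; the fallback it illustrates (treating a missing runc state directory as zero paused containers) is an assumption about a possible mitigation, not the project's actual behavior.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Reproduce minikube's paused-cluster precheck: list runc containers.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			fmt.Printf("runc containers: %s\n", out)
			return
		}
		// On this crio node /run/runc is missing, so runc exits 1 with
		// "open /run/runc: no such file or directory".
		if strings.Contains(string(out), "no such file or directory") {
			// Hypothetical mitigation: no runc state dir means runc manages no
			// containers here, so report zero paused containers rather than
			// failing the whole `addons disable` command.
			fmt.Println("no runc state directory; assuming zero paused containers")
			return
		}
		fmt.Printf("runc list failed: %v\n%s\n", err, out)
	}

The identical "open /run/runc: no such file or directory" error recurs verbatim in the Registry and RegistryCreds failures below, and plausibly also underlies the pause/unpause failures in the summary table, which points at the node image rather than at the individual addons.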

TestAddons/parallel/Registry (13.17s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.742128ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-gfg4q" [f3780320-4513-4f0c-a613-2e6dae9f1050] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003258425s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-7wchq" [ba24cd6f-ac09-4d7a-8504-fc72367cd2c3] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003739837s
addons_test.go:392: (dbg) Run:  kubectl --context addons-642189 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-642189 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-642189 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.681913855s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-642189 addons disable registry --alsologtostderr -v=1: exit status 11 (259.273495ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 18:59:31.131462  507324 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:59:31.131758  507324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:31.131772  507324 out.go:374] Setting ErrFile to fd 2...
	I1017 18:59:31.131780  507324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:31.131993  507324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 18:59:31.132312  507324 mustload.go:65] Loading cluster: addons-642189
	I1017 18:59:31.132739  507324 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:31.132762  507324 addons.go:606] checking whether the cluster is paused
	I1017 18:59:31.132908  507324 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:31.132926  507324 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:59:31.133360  507324 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:59:31.154848  507324 ssh_runner.go:195] Run: systemctl --version
	I1017 18:59:31.154915  507324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:59:31.179600  507324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:59:31.277548  507324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:59:31.277630  507324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:59:31.307233  507324 cri.go:89] found id: "621b748d538846f79bca883df087ce87a58a6e5cc5dbbb8f2ae4845785e122d6"
	I1017 18:59:31.307271  507324 cri.go:89] found id: "317712e1d5627e3d52413fcacd6f0a3e40e74b682567b7117acb2ddbf4da2a72"
	I1017 18:59:31.307277  507324 cri.go:89] found id: "c8951bd4e7631f9e6fa9ad944251500dd44cda63d891f7b553931aa3ef22e7e7"
	I1017 18:59:31.307285  507324 cri.go:89] found id: "6073132bac88bd54a1f9014aa1b74b68de0ac557ac483e5c0a7ff51ac939a2dd"
	I1017 18:59:31.307288  507324 cri.go:89] found id: "d3882b8636526fe7f302d3351b33fc68a8df5109463693af6642869704a2b6a2"
	I1017 18:59:31.307291  507324 cri.go:89] found id: "99fe19979e6f782cb4ff1df09e72c9c58535540daf8c28b63b4a3f1719cfa365"
	I1017 18:59:31.307293  507324 cri.go:89] found id: "600ce5e0b6a8556aa7c055afc13692cb00b2d6f0a82ba6d0817e5e424b49881c"
	I1017 18:59:31.307297  507324 cri.go:89] found id: "214596c066d6ebce81c069cda2c2790ee022d3770221e7e183390decc49e626b"
	I1017 18:59:31.307301  507324 cri.go:89] found id: "cd49fb8b1ee5ca17b886edf352059ec13c3aa8fed46c8383c6660656f0403d67"
	I1017 18:59:31.307312  507324 cri.go:89] found id: "b3f4b36a5cb43ddf3de225e5f08ad4b2d165ae6234c165082ae1316a48f48425"
	I1017 18:59:31.307328  507324 cri.go:89] found id: "fc47a341f594c2a4203992ac73a7c89fb4722c54e399f0949c3604dfa81f70ef"
	I1017 18:59:31.307332  507324 cri.go:89] found id: "d3140eef7e893c98db6a57843c26dd0767733610bfaab45265577ad3a64a334e"
	I1017 18:59:31.307339  507324 cri.go:89] found id: "bce8d27694469a014387080e94f416e3cfb88071ea69506ea8a1d04b16176e43"
	I1017 18:59:31.307343  507324 cri.go:89] found id: "26a77f9d8fd20d481d8ec7b0d85a65954b10af33ae4994293044ad2067b41872"
	I1017 18:59:31.307350  507324 cri.go:89] found id: "afa9f6b0496818adf412ab2c3cf979e86f2593796860b7b9e53c8fd85f0fe586"
	I1017 18:59:31.307365  507324 cri.go:89] found id: "ea8c7aa6a69f9b8476c9c28a6ac0944597fdf78921727f3c142c41a2b6a9bb00"
	I1017 18:59:31.307374  507324 cri.go:89] found id: "05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c"
	I1017 18:59:31.307379  507324 cri.go:89] found id: "c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854"
	I1017 18:59:31.307381  507324 cri.go:89] found id: "d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd"
	I1017 18:59:31.307384  507324 cri.go:89] found id: "49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f"
	I1017 18:59:31.307388  507324 cri.go:89] found id: "43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a"
	I1017 18:59:31.307392  507324 cri.go:89] found id: "8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d"
	I1017 18:59:31.307399  507324 cri.go:89] found id: "44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa"
	I1017 18:59:31.307403  507324 cri.go:89] found id: "a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9"
	I1017 18:59:31.307410  507324 cri.go:89] found id: ""
	I1017 18:59:31.307465  507324 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 18:59:31.323498  507324 out.go:203] 
	W1017 18:59:31.325266  507324 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 18:59:31.325289  507324 out.go:285] * 
	W1017 18:59:31.329364  507324 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 18:59:31.331080  507324 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-642189 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.17s)

TestAddons/parallel/RegistryCreds (0.43s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.346554ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-642189
addons_test.go:332: (dbg) Run:  kubectl --context addons-642189 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-642189 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (261.160713ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 18:59:34.801034  508318 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:59:34.801145  508318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:34.801153  508318 out.go:374] Setting ErrFile to fd 2...
	I1017 18:59:34.801157  508318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:34.801350  508318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 18:59:34.801619  508318 mustload.go:65] Loading cluster: addons-642189
	I1017 18:59:34.802011  508318 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:34.802031  508318 addons.go:606] checking whether the cluster is paused
	I1017 18:59:34.802135  508318 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:34.802154  508318 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:59:34.802563  508318 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:59:34.822202  508318 ssh_runner.go:195] Run: systemctl --version
	I1017 18:59:34.822385  508318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:59:34.844863  508318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:59:34.943133  508318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:59:34.943218  508318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:59:34.978456  508318 cri.go:89] found id: "621b748d538846f79bca883df087ce87a58a6e5cc5dbbb8f2ae4845785e122d6"
	I1017 18:59:34.978498  508318 cri.go:89] found id: "317712e1d5627e3d52413fcacd6f0a3e40e74b682567b7117acb2ddbf4da2a72"
	I1017 18:59:34.978504  508318 cri.go:89] found id: "c8951bd4e7631f9e6fa9ad944251500dd44cda63d891f7b553931aa3ef22e7e7"
	I1017 18:59:34.978508  508318 cri.go:89] found id: "6073132bac88bd54a1f9014aa1b74b68de0ac557ac483e5c0a7ff51ac939a2dd"
	I1017 18:59:34.978512  508318 cri.go:89] found id: "d3882b8636526fe7f302d3351b33fc68a8df5109463693af6642869704a2b6a2"
	I1017 18:59:34.978517  508318 cri.go:89] found id: "99fe19979e6f782cb4ff1df09e72c9c58535540daf8c28b63b4a3f1719cfa365"
	I1017 18:59:34.978521  508318 cri.go:89] found id: "600ce5e0b6a8556aa7c055afc13692cb00b2d6f0a82ba6d0817e5e424b49881c"
	I1017 18:59:34.978526  508318 cri.go:89] found id: "214596c066d6ebce81c069cda2c2790ee022d3770221e7e183390decc49e626b"
	I1017 18:59:34.978529  508318 cri.go:89] found id: "cd49fb8b1ee5ca17b886edf352059ec13c3aa8fed46c8383c6660656f0403d67"
	I1017 18:59:34.978543  508318 cri.go:89] found id: "b3f4b36a5cb43ddf3de225e5f08ad4b2d165ae6234c165082ae1316a48f48425"
	I1017 18:59:34.978551  508318 cri.go:89] found id: "fc47a341f594c2a4203992ac73a7c89fb4722c54e399f0949c3604dfa81f70ef"
	I1017 18:59:34.978555  508318 cri.go:89] found id: "d3140eef7e893c98db6a57843c26dd0767733610bfaab45265577ad3a64a334e"
	I1017 18:59:34.978561  508318 cri.go:89] found id: "bce8d27694469a014387080e94f416e3cfb88071ea69506ea8a1d04b16176e43"
	I1017 18:59:34.978565  508318 cri.go:89] found id: "26a77f9d8fd20d481d8ec7b0d85a65954b10af33ae4994293044ad2067b41872"
	I1017 18:59:34.978569  508318 cri.go:89] found id: "afa9f6b0496818adf412ab2c3cf979e86f2593796860b7b9e53c8fd85f0fe586"
	I1017 18:59:34.978594  508318 cri.go:89] found id: "ea8c7aa6a69f9b8476c9c28a6ac0944597fdf78921727f3c142c41a2b6a9bb00"
	I1017 18:59:34.978604  508318 cri.go:89] found id: "05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c"
	I1017 18:59:34.978610  508318 cri.go:89] found id: "c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854"
	I1017 18:59:34.978613  508318 cri.go:89] found id: "d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd"
	I1017 18:59:34.978617  508318 cri.go:89] found id: "49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f"
	I1017 18:59:34.978628  508318 cri.go:89] found id: "43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a"
	I1017 18:59:34.978632  508318 cri.go:89] found id: "8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d"
	I1017 18:59:34.978640  508318 cri.go:89] found id: "44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa"
	I1017 18:59:34.978645  508318 cri.go:89] found id: "a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9"
	I1017 18:59:34.978651  508318 cri.go:89] found id: ""
	I1017 18:59:34.978728  508318 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 18:59:34.994838  508318 out.go:203] 
	W1017 18:59:34.996230  508318 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 18:59:34.996272  508318 out.go:285] * 
	W1017 18:59:35.000765  508318 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 18:59:35.002280  508318 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-642189 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.43s)

TestAddons/parallel/Ingress (149.66s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-642189 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-642189 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-642189 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [644f38c5-6649-46f8-bf05-b4d2b264ded8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [644f38c5-6649-46f8-bf05-b4d2b264ded8] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003893907s
I1017 18:59:41.807468  495725 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-642189 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.975703434s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-642189 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
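
Editor's note: exit status 28 above is curl's "operation timed out" code, propagated through `minikube ssh`, meaning the ingress controller never answered on port 80 within the time the test allows. The Go probe below issues the same request from the CI host; it is an illustration only, and it assumes the node IP 192.168.49.2 printed by the `minikube ip` steps in this log.

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Build the same request the test sends with
		// `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`,
		// but from the CI host against the node IP (assumed: 192.168.49.2).
		req, err := http.NewRequest(http.MethodGet, "http://192.168.49.2/", nil)
		if err != nil {
			panic(err)
		}
		// Setting req.Host mirrors curl's -H 'Host: ...' override and is what
		// selects the nginx.example.com Ingress rule.
		req.Host = "nginx.example.com"

		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Do(req)
		if err != nil {
			// A timeout here corresponds to curl exiting with status 28.
			fmt.Println("no answer from ingress:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("ingress answered:", resp.Status)
	}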
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-642189
helpers_test.go:243: (dbg) docker inspect addons-642189:

-- stdout --
	[
	    {
	        "Id": "810df9073b89c74eff4799d0df8a6ca8a8bd99720281790a1fc39583f9548eb3",
	        "Created": "2025-10-17T18:56:47.345619046Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 497687,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T18:56:47.386313203Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/810df9073b89c74eff4799d0df8a6ca8a8bd99720281790a1fc39583f9548eb3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/810df9073b89c74eff4799d0df8a6ca8a8bd99720281790a1fc39583f9548eb3/hostname",
	        "HostsPath": "/var/lib/docker/containers/810df9073b89c74eff4799d0df8a6ca8a8bd99720281790a1fc39583f9548eb3/hosts",
	        "LogPath": "/var/lib/docker/containers/810df9073b89c74eff4799d0df8a6ca8a8bd99720281790a1fc39583f9548eb3/810df9073b89c74eff4799d0df8a6ca8a8bd99720281790a1fc39583f9548eb3-json.log",
	        "Name": "/addons-642189",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-642189:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-642189",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "810df9073b89c74eff4799d0df8a6ca8a8bd99720281790a1fc39583f9548eb3",
	                "LowerDir": "/var/lib/docker/overlay2/744567c26d3445f0286a6368c84803ddd87746d653da866f782f5056f17193d9-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/744567c26d3445f0286a6368c84803ddd87746d653da866f782f5056f17193d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/744567c26d3445f0286a6368c84803ddd87746d653da866f782f5056f17193d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/744567c26d3445f0286a6368c84803ddd87746d653da866f782f5056f17193d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-642189",
	                "Source": "/var/lib/docker/volumes/addons-642189/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-642189",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-642189",
	                "name.minikube.sigs.k8s.io": "addons-642189",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "68f31a3be24d6cd663a3cb3519d845dad847ca6f875fe3ab42e4c3255fba7d5b",
	            "SandboxKey": "/var/run/docker/netns/68f31a3be24d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-642189": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:82:83:4c:53:70",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c18e6eaa32c599bc5ecf999057629d81e48002de288024396da5438376dc6ea7",
	                    "EndpointID": "6b552c996a11764d7fd56d185c5a76c5b24251a546322fbc09de96d261801c13",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-642189",
	                        "810df9073b89"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-642189 -n addons-642189
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-642189 logs -n 25: (1.268280455s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-386230 --alsologtostderr --binary-mirror http://127.0.0.1:41417 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-386230 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ delete  │ -p binary-mirror-386230                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-386230 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ addons  │ disable dashboard -p addons-642189                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ addons  │ enable dashboard -p addons-642189                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ start   │ -p addons-642189 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ addons-642189 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-642189 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ enable headlamp -p addons-642189 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-642189 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-642189 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-642189 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-642189 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-642189 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ ip      │ addons-642189 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ addons-642189 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-642189 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ ssh     │ addons-642189 ssh cat /opt/local-path-provisioner/pvc-b324b6e5-390e-427c-bd7c-84a9e595ad1f_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ addons-642189 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-642189                                                                                                                                                                                                                                                                                                                                                                                           │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ addons-642189 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-642189 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ ssh     │ addons-642189 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-642189 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │                     │
	│ addons  │ addons-642189 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │                     │
	│ ip      │ addons-642189 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-642189        │ jenkins │ v1.37.0 │ 17 Oct 25 19:01 UTC │ 17 Oct 25 19:01 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 18:56:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 18:56:23.507351  497052 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:56:23.507656  497052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:23.507668  497052 out.go:374] Setting ErrFile to fd 2...
	I1017 18:56:23.507673  497052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:23.507931  497052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 18:56:23.508553  497052 out.go:368] Setting JSON to false
	I1017 18:56:23.509607  497052 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9522,"bootTime":1760717861,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 18:56:23.509729  497052 start.go:141] virtualization: kvm guest
	I1017 18:56:23.511775  497052 out.go:179] * [addons-642189] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 18:56:23.513138  497052 notify.go:220] Checking for updates...
	I1017 18:56:23.513165  497052 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 18:56:23.514764  497052 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 18:56:23.516385  497052 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 18:56:23.517781  497052 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 18:56:23.518988  497052 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 18:56:23.520177  497052 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 18:56:23.521466  497052 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 18:56:23.544817  497052 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 18:56:23.544957  497052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 18:56:23.607417  497052 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-17 18:56:23.596926247 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 18:56:23.607598  497052 docker.go:318] overlay module found
	I1017 18:56:23.609480  497052 out.go:179] * Using the docker driver based on user configuration
	I1017 18:56:23.610765  497052 start.go:305] selected driver: docker
	I1017 18:56:23.610783  497052 start.go:925] validating driver "docker" against <nil>
	I1017 18:56:23.610796  497052 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 18:56:23.611452  497052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 18:56:23.667517  497052 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-17 18:56:23.65761722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 18:56:23.667713  497052 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 18:56:23.667915  497052 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 18:56:23.669671  497052 out.go:179] * Using Docker driver with root privileges
	I1017 18:56:23.670744  497052 cni.go:84] Creating CNI manager for ""
	I1017 18:56:23.670804  497052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 18:56:23.670814  497052 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 18:56:23.670885  497052 start.go:349] cluster config:
	{Name:addons-642189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-642189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 18:56:23.672139  497052 out.go:179] * Starting "addons-642189" primary control-plane node in "addons-642189" cluster
	I1017 18:56:23.673339  497052 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 18:56:23.674462  497052 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 18:56:23.675534  497052 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:56:23.675571  497052 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 18:56:23.675581  497052 cache.go:58] Caching tarball of preloaded images
	I1017 18:56:23.675655  497052 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 18:56:23.675673  497052 preload.go:233] Found /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 18:56:23.675695  497052 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 18:56:23.676034  497052 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/config.json ...
	I1017 18:56:23.676060  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/config.json: {Name:mkcde08ab33d0282fa7fc0a52d8a6d2246e9d73f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:23.692532  497052 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1017 18:56:23.692675  497052 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1017 18:56:23.692719  497052 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1017 18:56:23.692729  497052 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1017 18:56:23.692740  497052 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1017 18:56:23.692749  497052 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1017 18:56:35.637952  497052 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1017 18:56:35.637996  497052 cache.go:232] Successfully downloaded all kic artifacts
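(The daemon check logged above at image.go:81 amounts to asking docker whether the base-image reference is already present locally. A minimal Go sketch of that one check, assuming only that the docker CLI is on PATH; minikube additionally consults its on-disk tarball cache, which this ignores, and the function name is illustrative.)

package sketch

import "os/exec"

// imageInDaemon reports whether an image reference already exists in the
// local docker daemon: `docker image inspect` exits zero only when the
// reference is present. Sketch only, not minikube's image.go.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}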
	I1017 18:56:35.638046  497052 start.go:360] acquireMachinesLock for addons-642189: {Name:mk981f556bc62a56e256ed48011138888bf0d350 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 18:56:35.638202  497052 start.go:364] duration metric: took 115.785µs to acquireMachinesLock for "addons-642189"
	I1017 18:56:35.638238  497052 start.go:93] Provisioning new machine with config: &{Name:addons-642189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-642189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 18:56:35.638320  497052 start.go:125] createHost starting for "" (driver="docker")
	I1017 18:56:35.640279  497052 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1017 18:56:35.640566  497052 start.go:159] libmachine.API.Create for "addons-642189" (driver="docker")
	I1017 18:56:35.640608  497052 client.go:168] LocalClient.Create starting
	I1017 18:56:35.640790  497052 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem
	I1017 18:56:35.758344  497052 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem
	I1017 18:56:36.285762  497052 cli_runner.go:164] Run: docker network inspect addons-642189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 18:56:36.303852  497052 cli_runner.go:211] docker network inspect addons-642189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 18:56:36.303932  497052 network_create.go:284] running [docker network inspect addons-642189] to gather additional debugging logs...
	I1017 18:56:36.304036  497052 cli_runner.go:164] Run: docker network inspect addons-642189
	W1017 18:56:36.321773  497052 cli_runner.go:211] docker network inspect addons-642189 returned with exit code 1
	I1017 18:56:36.321839  497052 network_create.go:287] error running [docker network inspect addons-642189]: docker network inspect addons-642189: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-642189 not found
	I1017 18:56:36.321859  497052 network_create.go:289] output of [docker network inspect addons-642189]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-642189 not found
	
	** /stderr **
	I1017 18:56:36.321957  497052 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 18:56:36.339965  497052 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00162a6b0}
	I1017 18:56:36.340015  497052 network_create.go:124] attempt to create docker network addons-642189 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1017 18:56:36.340099  497052 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-642189 addons-642189
	I1017 18:56:36.401297  497052 network_create.go:108] docker network addons-642189 192.168.49.0/24 created
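(The network-create step above can be reproduced outside minikube with a thin wrapper around the docker CLI. A minimal Go sketch, assuming only that docker is on PATH; the name and CIDR come from the log, while the function name and error handling are illustrative, not minikube's network_create.go.)

package sketch

import (
	"fmt"
	"os/exec"
)

// createNetwork reproduces the `docker network create` invocation logged
// above: a bridge network with a fixed subnet, gateway, MTU 1500, and the
// two minikube labels.
func createNetwork(name, subnet, gateway string) error {
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io="+name,
		name,
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker network create: %v: %s", err, out)
	}
	return nil
}

(For the run above the call would be createNetwork("addons-642189", "192.168.49.0/24", "192.168.49.1").)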
	I1017 18:56:36.401373  497052 kic.go:121] calculated static IP "192.168.49.2" for the "addons-642189" container
	I1017 18:56:36.401470  497052 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 18:56:36.418860  497052 cli_runner.go:164] Run: docker volume create addons-642189 --label name.minikube.sigs.k8s.io=addons-642189 --label created_by.minikube.sigs.k8s.io=true
	I1017 18:56:36.437865  497052 oci.go:103] Successfully created a docker volume addons-642189
	I1017 18:56:36.437964  497052 cli_runner.go:164] Run: docker run --rm --name addons-642189-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-642189 --entrypoint /usr/bin/test -v addons-642189:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 18:56:42.795042  497052 cli_runner.go:217] Completed: docker run --rm --name addons-642189-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-642189 --entrypoint /usr/bin/test -v addons-642189:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (6.357019308s)
	I1017 18:56:42.795116  497052 oci.go:107] Successfully prepared a docker volume addons-642189
	I1017 18:56:42.795165  497052 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:56:42.795197  497052 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 18:56:42.795275  497052 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-642189:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1017 18:56:47.272546  497052 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-642189:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.477214832s)
	I1017 18:56:47.272581  497052 kic.go:203] duration metric: took 4.477382627s to extract preloaded images to volume ...
	W1017 18:56:47.272678  497052 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1017 18:56:47.272749  497052 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1017 18:56:47.272791  497052 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 18:56:47.329001  497052 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-642189 --name addons-642189 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-642189 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-642189 --network addons-642189 --ip 192.168.49.2 --volume addons-642189:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 18:56:47.606142  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Running}}
	I1017 18:56:47.625600  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:56:47.644221  497052 cli_runner.go:164] Run: docker exec addons-642189 stat /var/lib/dpkg/alternatives/iptables
	I1017 18:56:47.691973  497052 oci.go:144] the created container "addons-642189" has a running status.
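(The two inspect calls above are a readiness poll on the freshly created container. A hedged Go sketch of the same check; the 500ms interval and the timeout parameter are assumptions, not minikube's values.)

package sketch

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `docker container inspect --format={{.State.Running}}`
// until it prints "true", mirroring the inspect calls in the log.
func waitRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Running}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %q not running after %v", name, timeout)
}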
	I1017 18:56:47.692009  497052 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa...
	I1017 18:56:48.389604  497052 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 18:56:48.415717  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:56:48.434156  497052 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 18:56:48.434179  497052 kic_runner.go:114] Args: [docker exec --privileged addons-642189 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 18:56:48.478824  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:56:48.496258  497052 machine.go:93] provisionDockerMachine start ...
	I1017 18:56:48.496378  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:48.514690  497052 main.go:141] libmachine: Using SSH client type: native
	I1017 18:56:48.514942  497052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1017 18:56:48.514955  497052 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 18:56:48.649170  497052 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-642189
	
	I1017 18:56:48.649206  497052 ubuntu.go:182] provisioning hostname "addons-642189"
	I1017 18:56:48.649283  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:48.667879  497052 main.go:141] libmachine: Using SSH client type: native
	I1017 18:56:48.668109  497052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1017 18:56:48.668124  497052 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-642189 && echo "addons-642189" | sudo tee /etc/hostname
	I1017 18:56:48.812829  497052 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-642189
	
	I1017 18:56:48.812917  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:48.831243  497052 main.go:141] libmachine: Using SSH client type: native
	I1017 18:56:48.831518  497052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1017 18:56:48.831538  497052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-642189' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-642189/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-642189' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 18:56:48.965874  497052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 18:56:48.965935  497052 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-492109/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-492109/.minikube}
	I1017 18:56:48.965971  497052 ubuntu.go:190] setting up certificates
	I1017 18:56:48.965986  497052 provision.go:84] configureAuth start
	I1017 18:56:48.966061  497052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-642189
	I1017 18:56:48.985364  497052 provision.go:143] copyHostCerts
	I1017 18:56:48.985455  497052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem (1123 bytes)
	I1017 18:56:48.985568  497052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem (1679 bytes)
	I1017 18:56:48.985626  497052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem (1078 bytes)
	I1017 18:56:48.985697  497052 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem org=jenkins.addons-642189 san=[127.0.0.1 192.168.49.2 addons-642189 localhost minikube]
	I1017 18:56:49.161622  497052 provision.go:177] copyRemoteCerts
	I1017 18:56:49.161711  497052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 18:56:49.161762  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:49.180072  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:56:49.278715  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 18:56:49.299727  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 18:56:49.318591  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 18:56:49.337286  497052 provision.go:87] duration metric: took 371.279564ms to configureAuth
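(configureAuth generates a server certificate whose SANs match the list logged above at provision.go:117. A generic crypto/x509 sketch of that kind of certificate; it self-signs instead of chaining to the generated CA, and the subject organization and lifetime are copied from values in this log, so treat it as an illustration rather than minikube's provision code.)

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// selfSignedCert builds a certificate carrying the given SAN set, e.g. the
// IPs 127.0.0.1 and 192.168.49.2 plus the DNS names addons-642189,
// localhost, and minikube seen in the log.
func selfSignedCert(ips []net.IP, dns []string) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-642189"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dns,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}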
	I1017 18:56:49.337327  497052 ubuntu.go:206] setting minikube options for container-runtime
	I1017 18:56:49.337500  497052 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:56:49.337605  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:49.355870  497052 main.go:141] libmachine: Using SSH client type: native
	I1017 18:56:49.356105  497052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1017 18:56:49.356134  497052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 18:56:49.609546  497052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 18:56:49.609576  497052 machine.go:96] duration metric: took 1.11329531s to provisionDockerMachine
	I1017 18:56:49.609590  497052 client.go:171] duration metric: took 13.968972026s to LocalClient.Create
	I1017 18:56:49.609616  497052 start.go:167] duration metric: took 13.9690511s to libmachine.API.Create "addons-642189"
	I1017 18:56:49.609626  497052 start.go:293] postStartSetup for "addons-642189" (driver="docker")
	I1017 18:56:49.609642  497052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 18:56:49.609734  497052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 18:56:49.609793  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:49.627788  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:56:49.727586  497052 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 18:56:49.731374  497052 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 18:56:49.731414  497052 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 18:56:49.731428  497052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/addons for local assets ...
	I1017 18:56:49.731513  497052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/files for local assets ...
	I1017 18:56:49.731556  497052 start.go:296] duration metric: took 121.92119ms for postStartSetup
	I1017 18:56:49.731923  497052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-642189
	I1017 18:56:49.749707  497052 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/config.json ...
	I1017 18:56:49.749992  497052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 18:56:49.750035  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:49.767939  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:56:49.862551  497052 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 18:56:49.867747  497052 start.go:128] duration metric: took 14.229404339s to createHost
	I1017 18:56:49.867776  497052 start.go:83] releasing machines lock for "addons-642189", held for 14.229555848s
	I1017 18:56:49.867846  497052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-642189
	I1017 18:56:49.886000  497052 ssh_runner.go:195] Run: cat /version.json
	I1017 18:56:49.886052  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:49.886108  497052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 18:56:49.886193  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:49.904941  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:56:49.904988  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:56:50.062813  497052 ssh_runner.go:195] Run: systemctl --version
	I1017 18:56:50.069615  497052 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 18:56:50.105769  497052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 18:56:50.110954  497052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 18:56:50.111020  497052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 18:56:50.139285  497052 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1017 18:56:50.139318  497052 start.go:495] detecting cgroup driver to use...
	I1017 18:56:50.139349  497052 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 18:56:50.139391  497052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 18:56:50.156632  497052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 18:56:50.169358  497052 docker.go:218] disabling cri-docker service (if available) ...
	I1017 18:56:50.169419  497052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 18:56:50.186340  497052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 18:56:50.204923  497052 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 18:56:50.283445  497052 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 18:56:50.377874  497052 docker.go:234] disabling docker service ...
	I1017 18:56:50.377953  497052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 18:56:50.397984  497052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 18:56:50.411531  497052 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 18:56:50.493875  497052 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 18:56:50.577910  497052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 18:56:50.591649  497052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 18:56:50.606805  497052 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 18:56:50.606879  497052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:50.618801  497052 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 18:56:50.618878  497052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:50.629437  497052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:50.638905  497052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:50.648869  497052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 18:56:50.657623  497052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:50.666930  497052 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:50.681247  497052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:50.690498  497052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 18:56:50.698638  497052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 18:56:50.706609  497052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 18:56:50.782771  497052 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 18:56:50.895648  497052 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 18:56:50.895749  497052 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 18:56:50.900095  497052 start.go:563] Will wait 60s for crictl version
	I1017 18:56:50.900162  497052 ssh_runner.go:195] Run: which crictl
	I1017 18:56:50.904255  497052 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 18:56:50.931013  497052 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 18:56:50.931112  497052 ssh_runner.go:195] Run: crio --version
	I1017 18:56:50.962129  497052 ssh_runner.go:195] Run: crio --version
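(The CRI-O preparation above boils down to two sed substitutions in /etc/crio/crio.conf.d/02-crio.conf followed by a daemon-reload and restart. A condensed Go sketch of just those steps, assuming root on the node; it omits the conmon_cgroup and sysctl edits also shown in the log, and is not minikube's crio.go.)

package sketch

import (
	"fmt"
	"os/exec"
)

// configureCrio applies the pause-image and cgroup-manager substitutions
// logged above, then restarts CRI-O.
func configureCrio(pauseImage, cgroupManager string) error {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	steps := [][]string{
		{"sed", "-i", fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, pauseImage), conf},
		{"sed", "-i", fmt.Sprintf(`s|^.*cgroup_manager = .*$|cgroup_manager = "%s"|`, cgroupManager), conf},
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "crio"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v: %s", s, err, out)
		}
	}
	return nil
}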
	I1017 18:56:50.993567  497052 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 18:56:50.994865  497052 cli_runner.go:164] Run: docker network inspect addons-642189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 18:56:51.011944  497052 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 18:56:51.016337  497052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
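(The bash one-liner above drops any stale host.minikube.internal line from /etc/hosts and appends the fresh mapping. The same edit expressed directly in Go, as an illustrative equivalent; it needs root, and minikube performs it over SSH on the node rather than locally.)

package sketch

import (
	"os"
	"strings"
)

// ensureHostsEntry removes any existing "<ip><TAB><name>" line from
// /etc/hosts and appends "ip<TAB>name", matching the grep/echo pipeline
// in the log.
func ensureHostsEntry(ip, name string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+name)
	return os.WriteFile("/etc/hosts", []byte(strings.Join(keep, "\n")+"\n"), 0o644)
}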
	I1017 18:56:51.027067  497052 kubeadm.go:883] updating cluster {Name:addons-642189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-642189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 18:56:51.027187  497052 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:56:51.027230  497052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 18:56:51.060153  497052 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 18:56:51.060178  497052 crio.go:433] Images already preloaded, skipping extraction
	I1017 18:56:51.060225  497052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 18:56:51.087679  497052 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 18:56:51.087734  497052 cache_images.go:85] Images are preloaded, skipping loading
	I1017 18:56:51.087744  497052 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 18:56:51.087877  497052 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-642189 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-642189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 18:56:51.087942  497052 ssh_runner.go:195] Run: crio config
	I1017 18:56:51.135280  497052 cni.go:84] Creating CNI manager for ""
	I1017 18:56:51.135306  497052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 18:56:51.135326  497052 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 18:56:51.135353  497052 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-642189 NodeName:addons-642189 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 18:56:51.135496  497052 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-642189"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 18:56:51.135562  497052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 18:56:51.144131  497052 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 18:56:51.144235  497052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 18:56:51.153060  497052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 18:56:51.166714  497052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 18:56:51.183243  497052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
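(The kubeadm config written above to /var/tmp/minikube/kubeadm.yaml.new is one four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small standard-library-only Go sketch that splits such a file back into its documents; the function name and path handling are illustrative.)

package sketch

import (
	"os"
	"strings"
)

// splitDocs separates a multi-document YAML file on "---" boundaries and
// drops empty fragments, returning one string per document.
func splitDocs(path string) ([]string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var docs []string
	for _, d := range strings.Split(string(data), "\n---\n") {
		if strings.TrimSpace(d) != "" {
			docs = append(docs, d)
		}
	}
	return docs, nil
}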
	I1017 18:56:51.197036  497052 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1017 18:56:51.201059  497052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 18:56:51.211817  497052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 18:56:51.293428  497052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 18:56:51.321321  497052 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189 for IP: 192.168.49.2
	I1017 18:56:51.321354  497052 certs.go:195] generating shared ca certs ...
	I1017 18:56:51.321376  497052 certs.go:227] acquiring lock for ca certs: {Name:mkc97483d62151ba5c32d923dd19e3e2b3661468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:51.321514  497052 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key
	I1017 18:56:51.629873  497052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt ...
	I1017 18:56:51.629906  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt: {Name:mk440c0dfa16bb02464fbb467fa5aa87c3765bd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:51.630114  497052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key ...
	I1017 18:56:51.630126  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key: {Name:mkc9a271aa2bbc3358be01e9b4bce62869f1d064 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:51.630204  497052 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key
	I1017 18:56:51.740419  497052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.crt ...
	I1017 18:56:51.740450  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.crt: {Name:mkf94b45b8d9778becd2cdd6b12a0b633a9ae526 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:51.740620  497052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key ...
	I1017 18:56:51.740631  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key: {Name:mkce39cadc70eea20f0f21b9ae81efbd1f2d8303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:51.740714  497052 certs.go:257] generating profile certs ...
	I1017 18:56:51.740777  497052 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.key
	I1017 18:56:51.740792  497052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt with IP's: []
	I1017 18:56:52.050258  497052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt ...
	I1017 18:56:52.050292  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: {Name:mk31ad05cd8e9966a999e9ce8772563fd937d0fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:52.050468  497052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.key ...
	I1017 18:56:52.050491  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.key: {Name:mkd8228d8fac04b24f141738f06daa560efd24a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:52.050573  497052 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.key.266c3263
	I1017 18:56:52.050592  497052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.crt.266c3263 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1017 18:56:52.224447  497052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.crt.266c3263 ...
	I1017 18:56:52.224483  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.crt.266c3263: {Name:mkccccdd0383a0c5961d198a8ade089cc04198ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:52.224661  497052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.key.266c3263 ...
	I1017 18:56:52.224673  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.key.266c3263: {Name:mk6854fd0ecd7f2f485707f53b7d269e7aa49c9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:52.224757  497052 certs.go:382] copying /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.crt.266c3263 -> /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.crt
	I1017 18:56:52.224857  497052 certs.go:386] copying /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.key.266c3263 -> /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.key
	I1017 18:56:52.224915  497052 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/proxy-client.key
	I1017 18:56:52.224935  497052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/proxy-client.crt with IP's: []
	I1017 18:56:52.486460  497052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/proxy-client.crt ...
	I1017 18:56:52.486493  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/proxy-client.crt: {Name:mk81e4a41268fac4df526b7a037b0d607ca1da79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:52.486661  497052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/proxy-client.key ...
	I1017 18:56:52.486675  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/proxy-client.key: {Name:mke25b84bb762b01365af8953171bb774daff27b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:52.486855  497052 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 18:56:52.486891  497052 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem (1078 bytes)
	I1017 18:56:52.486914  497052 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem (1123 bytes)
	I1017 18:56:52.486935  497052 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem (1679 bytes)
	I1017 18:56:52.487601  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 18:56:52.506255  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 18:56:52.524081  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 18:56:52.541915  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 18:56:52.560046  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1017 18:56:52.578133  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 18:56:52.596352  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 18:56:52.614649  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 18:56:52.632554  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 18:56:52.652890  497052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 18:56:52.666484  497052 ssh_runner.go:195] Run: openssl version
	I1017 18:56:52.672995  497052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 18:56:52.684809  497052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 18:56:52.688846  497052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 18:56:52.688926  497052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 18:56:52.723670  497052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
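The ln -fs to /etc/ssl/certs/b5213941.0 above follows OpenSSL's hashed-directory convention: a CA is looked up by a file named <subject-hash>.0, and the subject hash is exactly what the preceding `openssl x509 -hash -noout` run printed. Reproducing the link name by hand (a sketch; cert path from the log):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"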
	I1017 18:56:52.732978  497052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 18:56:52.736885  497052 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
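The failed stat above is the first-start probe: if apiserver-kubelet-client.crt is missing, kubeadm has never initialized this node, so a full init follows rather than a reuse of existing state. A minimal shell equivalent (path from the log):

	if sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
	  echo "existing cluster: reuse configuration"
	else
	  echo "first start: proceed to kubeadm init"
	fi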
	I1017 18:56:52.736937  497052 kubeadm.go:400] StartCluster: {Name:addons-642189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-642189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 18:56:52.737016  497052 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:56:52.737064  497052 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:56:52.765597  497052 cri.go:89] found id: ""
	I1017 18:56:52.765695  497052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 18:56:52.774301  497052 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 18:56:52.783040  497052 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 18:56:52.783112  497052 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 18:56:52.791264  497052 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 18:56:52.791291  497052 kubeadm.go:157] found existing configuration files:
	
	I1017 18:56:52.791341  497052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 18:56:52.799203  497052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 18:56:52.799279  497052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 18:56:52.806929  497052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 18:56:52.815246  497052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 18:56:52.815314  497052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 18:56:52.823193  497052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 18:56:52.831014  497052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 18:56:52.831078  497052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 18:56:52.838998  497052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 18:56:52.847468  497052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 18:56:52.847536  497052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
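The four grep/rm pairs from 18:56:52.791 onward apply one rule per kubeconfig: keep the file only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm regenerates it. Condensed into a loop (file names and endpoint copied from the log):

	ENDPOINT='https://control-plane.minikube.internal:8443'
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # grep exits non-zero when the endpoint (or the file itself) is missing
	  sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done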
	I1017 18:56:52.855379  497052 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 18:56:52.896194  497052 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 18:56:52.896314  497052 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 18:56:52.920241  497052 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 18:56:52.920364  497052 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1017 18:56:52.920449  497052 kubeadm.go:318] OS: Linux
	I1017 18:56:52.920551  497052 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 18:56:52.920638  497052 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 18:56:52.920753  497052 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 18:56:52.920827  497052 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 18:56:52.920894  497052 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 18:56:52.920960  497052 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 18:56:52.921021  497052 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 18:56:52.921103  497052 kubeadm.go:318] CGROUPS_IO: enabled
	I1017 18:56:52.986357  497052 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 18:56:52.986502  497052 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 18:56:52.986654  497052 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 18:56:52.995098  497052 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 18:56:52.997834  497052 out.go:252]   - Generating certificates and keys ...
	I1017 18:56:52.997958  497052 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 18:56:52.998028  497052 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 18:56:53.121595  497052 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 18:56:53.446235  497052 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 18:56:53.548350  497052 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 18:56:53.750146  497052 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 18:56:53.893103  497052 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 18:56:53.893244  497052 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-642189 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1017 18:56:54.008617  497052 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 18:56:54.008802  497052 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-642189 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1017 18:56:54.146010  497052 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 18:56:54.361105  497052 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 18:56:54.498218  497052 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 18:56:54.498326  497052 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 18:56:54.608762  497052 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 18:56:54.847587  497052 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 18:56:55.118157  497052 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 18:56:55.797864  497052 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 18:56:55.930740  497052 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 18:56:55.931303  497052 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 18:56:55.935312  497052 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 18:56:55.936953  497052 out.go:252]   - Booting up control plane ...
	I1017 18:56:55.937100  497052 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 18:56:55.937227  497052 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 18:56:55.937946  497052 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 18:56:55.953439  497052 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 18:56:55.953567  497052 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 18:56:55.960790  497052 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 18:56:55.960919  497052 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 18:56:55.960968  497052 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 18:56:56.058833  497052 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 18:56:56.059018  497052 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 18:56:56.559852  497052 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.233283ms
	I1017 18:56:56.563147  497052 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 18:56:56.563278  497052 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1017 18:56:56.563393  497052 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 18:56:56.563506  497052 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 18:56:57.580675  497052 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.017460176s
	I1017 18:56:58.636295  497052 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.073161983s
	I1017 18:57:00.565317  497052 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002162474s
	I1017 18:57:00.577083  497052 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 18:57:00.588338  497052 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 18:57:00.597436  497052 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 18:57:00.597677  497052 kubeadm.go:318] [mark-control-plane] Marking the node addons-642189 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 18:57:00.605981  497052 kubeadm.go:318] [bootstrap-token] Using token: zu8ikn.cdwz8remj9o7hw3s
	I1017 18:57:00.607636  497052 out.go:252]   - Configuring RBAC rules ...
	I1017 18:57:00.607822  497052 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 18:57:00.611169  497052 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 18:57:00.617020  497052 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 18:57:00.620528  497052 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 18:57:00.623428  497052 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 18:57:00.626491  497052 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 18:57:00.972138  497052 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 18:57:01.389416  497052 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 18:57:01.970957  497052 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 18:57:01.971841  497052 kubeadm.go:318] 
	I1017 18:57:01.971927  497052 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 18:57:01.971940  497052 kubeadm.go:318] 
	I1017 18:57:01.972047  497052 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 18:57:01.972059  497052 kubeadm.go:318] 
	I1017 18:57:01.972093  497052 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 18:57:01.972181  497052 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 18:57:01.972280  497052 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 18:57:01.972301  497052 kubeadm.go:318] 
	I1017 18:57:01.972385  497052 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 18:57:01.972397  497052 kubeadm.go:318] 
	I1017 18:57:01.972465  497052 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 18:57:01.972477  497052 kubeadm.go:318] 
	I1017 18:57:01.972558  497052 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 18:57:01.972641  497052 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 18:57:01.972747  497052 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 18:57:01.972758  497052 kubeadm.go:318] 
	I1017 18:57:01.972864  497052 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 18:57:01.972953  497052 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 18:57:01.972958  497052 kubeadm.go:318] 
	I1017 18:57:01.973064  497052 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token zu8ikn.cdwz8remj9o7hw3s \
	I1017 18:57:01.973201  497052 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e \
	I1017 18:57:01.973229  497052 kubeadm.go:318] 	--control-plane 
	I1017 18:57:01.973237  497052 kubeadm.go:318] 
	I1017 18:57:01.973333  497052 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 18:57:01.973340  497052 kubeadm.go:318] 
	I1017 18:57:01.973444  497052 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token zu8ikn.cdwz8remj9o7hw3s \
	I1017 18:57:01.973586  497052 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e 
	I1017 18:57:01.976079  497052 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 18:57:01.976244  497052 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 18:57:01.976272  497052 cni.go:84] Creating CNI manager for ""
	I1017 18:57:01.976286  497052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 18:57:01.978848  497052 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 18:57:01.980075  497052 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 18:57:01.984608  497052 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 18:57:01.984626  497052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 18:57:01.998020  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 18:57:02.212363  497052 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 18:57:02.212424  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:02.212473  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-642189 minikube.k8s.io/updated_at=2025_10_17T18_57_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=addons-642189 minikube.k8s.io/primary=true
	I1017 18:57:02.222935  497052 ops.go:34] apiserver oom_adj: -16
	I1017 18:57:02.292126  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:02.793222  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:03.292554  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:03.792902  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:04.292613  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:04.792220  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:05.292263  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:05.792943  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:06.293220  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:06.792903  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:06.872377  497052 kubeadm.go:1113] duration metric: took 4.660013134s to wait for elevateKubeSystemPrivileges
	I1017 18:57:06.872408  497052 kubeadm.go:402] duration metric: took 14.135475724s to StartCluster
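The burst of `kubectl get sa default` calls between 18:57:02 and 18:57:06 is a readiness poll: the default service account only exists once the controller-manager's service-account controller is running, so the call is retried on a fixed interval (about 4.66s in total here). A hedged equivalent, with the versioned kubectl path shortened for readability:

	# Wait until the default service account exists (controller-manager is up)
	until sudo kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done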
	I1017 18:57:06.872427  497052 settings.go:142] acquiring lock: {Name:mkb8ebc6edbbb6915dd74086f502bcc2721555a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:06.872562  497052 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 18:57:06.873086  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:06.873343  497052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 18:57:06.873379  497052 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 18:57:06.873470  497052 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1017 18:57:06.873624  497052 addons.go:69] Setting yakd=true in profile "addons-642189"
	I1017 18:57:06.873666  497052 addons.go:238] Setting addon yakd=true in "addons-642189"
	I1017 18:57:06.873674  497052 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:57:06.873736  497052 addons.go:69] Setting gcp-auth=true in profile "addons-642189"
	I1017 18:57:06.873748  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.873759  497052 mustload.go:65] Loading cluster: addons-642189
	I1017 18:57:06.873669  497052 addons.go:69] Setting inspektor-gadget=true in profile "addons-642189"
	I1017 18:57:06.873781  497052 addons.go:238] Setting addon inspektor-gadget=true in "addons-642189"
	I1017 18:57:06.873787  497052 addons.go:69] Setting ingress=true in profile "addons-642189"
	I1017 18:57:06.873813  497052 addons.go:238] Setting addon ingress=true in "addons-642189"
	I1017 18:57:06.873814  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.873866  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.873868  497052 addons.go:69] Setting ingress-dns=true in profile "addons-642189"
	I1017 18:57:06.873895  497052 addons.go:238] Setting addon ingress-dns=true in "addons-642189"
	I1017 18:57:06.873930  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.873985  497052 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:57:06.874148  497052 addons.go:69] Setting cloud-spanner=true in profile "addons-642189"
	I1017 18:57:06.874169  497052 addons.go:238] Setting addon cloud-spanner=true in "addons-642189"
	I1017 18:57:06.874204  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.874344  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.874363  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.874376  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.874381  497052 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-642189"
	I1017 18:57:06.874398  497052 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-642189"
	I1017 18:57:06.874415  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.874420  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.874660  497052 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-642189"
	I1017 18:57:06.874767  497052 addons.go:69] Setting registry-creds=true in profile "addons-642189"
	I1017 18:57:06.874795  497052 addons.go:238] Setting addon registry-creds=true in "addons-642189"
	I1017 18:57:06.874819  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.874843  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.874771  497052 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-642189"
	I1017 18:57:06.875010  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.875015  497052 addons.go:69] Setting default-storageclass=true in profile "addons-642189"
	I1017 18:57:06.875058  497052 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-642189"
	I1017 18:57:06.875095  497052 addons.go:69] Setting storage-provisioner=true in profile "addons-642189"
	I1017 18:57:06.875108  497052 addons.go:238] Setting addon storage-provisioner=true in "addons-642189"
	I1017 18:57:06.875128  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.875246  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.875328  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.875862  497052 out.go:179] * Verifying Kubernetes components...
	I1017 18:57:06.875944  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.876634  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.877017  497052 addons.go:69] Setting volumesnapshots=true in profile "addons-642189"
	I1017 18:57:06.877039  497052 addons.go:238] Setting addon volumesnapshots=true in "addons-642189"
	I1017 18:57:06.877066  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.877528  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.877637  497052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 18:57:06.877721  497052 addons.go:69] Setting metrics-server=true in profile "addons-642189"
	I1017 18:57:06.877739  497052 addons.go:238] Setting addon metrics-server=true in "addons-642189"
	I1017 18:57:06.877750  497052 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-642189"
	I1017 18:57:06.877769  497052 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-642189"
	I1017 18:57:06.877776  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.878061  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.878272  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.878831  497052 addons.go:69] Setting volcano=true in profile "addons-642189"
	I1017 18:57:06.878911  497052 addons.go:238] Setting addon volcano=true in "addons-642189"
	I1017 18:57:06.878945  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.879404  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.879707  497052 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-642189"
	I1017 18:57:06.879734  497052 addons.go:69] Setting registry=true in profile "addons-642189"
	I1017 18:57:06.879748  497052 addons.go:238] Setting addon registry=true in "addons-642189"
	I1017 18:57:06.879754  497052 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-642189"
	I1017 18:57:06.879780  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.879791  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.874363  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.897408  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.897926  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.898047  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.919232  497052 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1017 18:57:06.920916  497052 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 18:57:06.920943  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1017 18:57:06.921010  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
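Each `docker container inspect -f` call of this shape resolves the host port Docker mapped to the container's SSH port (22/tcp); the sshutil lines below then dial that port (33138). Standalone, the template looks like this:

	# Print the host port bound to container port 22/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-642189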
	I1017 18:57:06.929739  497052 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1017 18:57:06.931521  497052 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1017 18:57:06.931552  497052 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1017 18:57:06.931635  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:06.932853  497052 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1017 18:57:06.934471  497052 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 18:57:06.936749  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1017 18:57:06.936884  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:06.940591  497052 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1017 18:57:06.940860  497052 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1017 18:57:06.942508  497052 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 18:57:06.942533  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1017 18:57:06.942601  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:06.943015  497052 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1017 18:57:06.943034  497052 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1017 18:57:06.943206  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:06.945078  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.952255  497052 addons.go:238] Setting addon default-storageclass=true in "addons-642189"
	I1017 18:57:06.952323  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.953224  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.961703  497052 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1017 18:57:06.963163  497052 out.go:179]   - Using image docker.io/registry:3.0.0
	I1017 18:57:06.964250  497052 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1017 18:57:06.964518  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1017 18:57:06.965016  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:06.972529  497052 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1017 18:57:06.974789  497052 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1017 18:57:06.974810  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1017 18:57:06.974872  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:06.997260  497052 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-642189"
	I1017 18:57:06.997324  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.997622  497052 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 18:57:06.997861  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:07.002754  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.002810  497052 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1017 18:57:07.002857  497052 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 18:57:07.002873  497052 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 18:57:07.002936  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:07.002940  497052 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 18:57:07.002952  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 18:57:07.003014  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:07.004419  497052 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1017 18:57:07.004442  497052 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1017 18:57:07.004501  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:07.009330  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.016524  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	W1017 18:57:07.017881  497052 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1017 18:57:07.022332  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1017 18:57:07.022418  497052 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1017 18:57:07.023826  497052 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 18:57:07.023962  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1017 18:57:07.026785  497052 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 18:57:07.026937  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1017 18:57:07.027184  497052 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1017 18:57:07.029327  497052 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 18:57:07.029482  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1017 18:57:07.029740  497052 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 18:57:07.029762  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1017 18:57:07.029863  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:07.029870  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:07.033494  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1017 18:57:07.034464  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1017 18:57:07.036340  497052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1017 18:57:07.036362  497052 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1017 18:57:07.036428  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:07.036634  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1017 18:57:07.042309  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.044540  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.046265  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1017 18:57:07.047646  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1017 18:57:07.048972  497052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1017 18:57:07.049001  497052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1017 18:57:07.049076  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:07.063893  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.066625  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.069757  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.071756  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.075808  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.077102  497052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 18:57:07.078559  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.081097  497052 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1017 18:57:07.082439  497052 out.go:179]   - Using image docker.io/busybox:stable
	I1017 18:57:07.084299  497052 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 18:57:07.084506  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1017 18:57:07.084636  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:07.108078  497052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 18:57:07.109571  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.110794  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.115960  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	W1017 18:57:07.118479  497052 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 18:57:07.118525  497052 retry.go:31] will retry after 298.894403ms: ssh: handshake failed: EOF
	I1017 18:57:07.125416  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.133712  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.210395  497052 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1017 18:57:07.210429  497052 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1017 18:57:07.223514  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 18:57:07.224774  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 18:57:07.233708  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 18:57:07.234161  497052 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1017 18:57:07.234183  497052 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1017 18:57:07.250830  497052 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1017 18:57:07.250864  497052 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1017 18:57:07.254519  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 18:57:07.260703  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1017 18:57:07.273146  497052 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1017 18:57:07.273174  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1017 18:57:07.274451  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 18:57:07.277404  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 18:57:07.288678  497052 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1017 18:57:07.288721  497052 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1017 18:57:07.296406  497052 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:07.296437  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1017 18:57:07.311975  497052 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1017 18:57:07.312079  497052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1017 18:57:07.316400  497052 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1017 18:57:07.316539  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1017 18:57:07.329960  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:07.330065  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 18:57:07.332272  497052 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1017 18:57:07.332294  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1017 18:57:07.335379  497052 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1017 18:57:07.335403  497052 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1017 18:57:07.350248  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1017 18:57:07.350651  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 18:57:07.365424  497052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1017 18:57:07.365473  497052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1017 18:57:07.390354  497052 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 18:57:07.390389  497052 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1017 18:57:07.398787  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1017 18:57:07.405249  497052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1017 18:57:07.405281  497052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1017 18:57:07.463232  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 18:57:07.468750  497052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1017 18:57:07.468782  497052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1017 18:57:07.497985  497052 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1017 18:57:07.500656  497052 node_ready.go:35] waiting up to 6m0s for node "addons-642189" to be "Ready" ...
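The node_ready.go line above starts a poll against the API server until the node reports the Ready condition (it stays "Ready":"False" for a while below). A minimal sketch of that style of check with client-go; the function name, interval, and error handling are illustrative, not minikube's actual implementation:

```go
// Sketch: poll a node's Ready condition, the way node_ready.go above
// waits for "addons-642189". Illustrative, not minikube's code.
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %q Ready=%s\n", name, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```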
	I1017 18:57:07.523654  497052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1017 18:57:07.523698  497052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1017 18:57:07.589775  497052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1017 18:57:07.589802  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1017 18:57:07.638225  497052 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1017 18:57:07.638338  497052 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1017 18:57:07.678653  497052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1017 18:57:07.678869  497052 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1017 18:57:07.703316  497052 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1017 18:57:07.703432  497052 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1017 18:57:07.776584  497052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1017 18:57:07.776657  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1017 18:57:07.778449  497052 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1017 18:57:07.778473  497052 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1017 18:57:07.817885  497052 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1017 18:57:07.817924  497052 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1017 18:57:07.835494  497052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1017 18:57:07.835521  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1017 18:57:07.853021  497052 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 18:57:07.853056  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1017 18:57:07.870148  497052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1017 18:57:07.870183  497052 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1017 18:57:07.899178  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 18:57:07.915465  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1017 18:57:08.016285  497052 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-642189" context rescaled to 1 replicas
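The rescale above caps CoreDNS at one replica for this single-node cluster. A sketch of the same operation through the Deployment Scale subresource; this assumes a configured clientset and is not minikube's kapi.go verbatim:

```go
// Sketch: rescale the coredns deployment to 1 replica via the Scale
// subresource, as the kapi.go line above reports. Illustrative only.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func rescaleCoreDNS(cs *kubernetes.Clientset) error {
	ctx := context.TODO()
	scale, err := cs.AppsV1().Deployments("kube-system").
		GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 1
	_, err = cs.AppsV1().Deployments("kube-system").
		UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
```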
	I1017 18:57:08.515729  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.241231003s)
	I1017 18:57:08.515783  497052 addons.go:479] Verifying addon ingress=true in "addons-642189"
	I1017 18:57:08.515822  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.238387018s)
	I1017 18:57:08.515929  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.185932503s)
	W1017 18:57:08.515968  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:08.515989  497052 retry.go:31] will retry after 374.51806ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
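The validation failure above ("apiVersion not set, kind not set") means at least one document inside ig-crd.yaml is missing its TypeMeta, typically an empty or truncated document after a "---" separator, and kubectl rejects it on every retry that follows, even with --force. A quick standalone way to locate such a document (a sketch using gopkg.in/yaml.v3; the path is illustrative and this is not part of minikube):

```go
// Sketch: scan a multi-document YAML file and report documents missing
// apiVersion or kind -- the condition kubectl complains about above.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("ig-crd.yaml") // illustrative path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			fmt.Printf("document %d: apiVersion/kind not set\n", i)
		}
	}
}
```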
	I1017 18:57:08.515995  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.185905685s)
	I1017 18:57:08.516062  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.165771811s)
	I1017 18:57:08.516085  497052 addons.go:479] Verifying addon registry=true in "addons-642189"
	I1017 18:57:08.516192  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.117371509s)
	I1017 18:57:08.516135  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.16546121s)
	I1017 18:57:08.516286  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.053012277s)
	I1017 18:57:08.516314  497052 addons.go:479] Verifying addon metrics-server=true in "addons-642189"
	I1017 18:57:08.517438  497052 out.go:179] * Verifying ingress addon...
	I1017 18:57:08.518446  497052 out.go:179] * Verifying registry addon...
	I1017 18:57:08.518445  497052 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-642189 service yakd-dashboard -n yakd-dashboard
	
	I1017 18:57:08.520136  497052 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1017 18:57:08.521074  497052 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1017 18:57:08.523652  497052 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 18:57:08.523671  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:08.523774  497052 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1017 18:57:08.523794  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
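The kapi.go lines above (and the long runs of them below) poll pods by label selector until they leave Pending. A minimal sketch of that loop with client-go; names, interval, and the Running check are illustrative, not minikube's kapi.go:

```go
// Sketch: list pods by label selector and report their phase,
// mirroring the kapi.go wait loops in this log. Illustrative only.
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitPodsRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for _, p := range pods.Items {
				fmt.Printf("pod %s: %s\n", p.Name, p.Status.Phase)
				if p.Status.Phase != corev1.PodRunning {
					ready = false
				}
			}
			if ready {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pods %q in %q not Running after %v", selector, ns, timeout)
}
```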
	I1017 18:57:08.891254  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:09.023998  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:09.024271  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:09.072137  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.172899155s)
	W1017 18:57:09.072185  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1017 18:57:09.072213  497052 retry.go:31] will retry after 249.396086ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
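The error above is an ordering problem rather than a bad manifest: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, and the CRD is not yet Established when the CR arrives, hence "ensure CRDs are installed first". The retry below succeeds once the CRD registers. A sketch of waiting for the Established condition explicitly before applying dependent CRs (assumes the apiextensions clientset; illustrative, not minikube's code):

```go
// Sketch: block until a CRD reports Established before applying CRs
// that depend on it -- the ordering the retry above works around.
package sketch

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

func waitCRDEstablished(cs *apiextclient.Clientset, name string) error {
	return wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().
			Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil
		}
		for _, c := range crd.Status.Conditions {
			if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}
```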
	I1017 18:57:09.072406  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.156874885s)
	I1017 18:57:09.072448  497052 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-642189"
	I1017 18:57:09.073971  497052 out.go:179] * Verifying csi-hostpath-driver addon...
	I1017 18:57:09.076827  497052 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1017 18:57:09.083764  497052 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 18:57:09.083786  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:09.322421  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1017 18:57:09.504799  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:09.524220  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:09.524410  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1017 18:57:09.534922  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:09.534956  497052 retry.go:31] will retry after 539.222379ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:09.624635  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:10.025270  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:10.025472  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:10.074521  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:10.126795  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:10.524482  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:10.524664  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:10.625748  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:11.024171  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:11.024337  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:11.080213  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:11.524312  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:11.524432  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:11.625251  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:11.865006  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.542533999s)
	I1017 18:57:11.865163  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.790583534s)
	W1017 18:57:11.865205  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:11.865223  497052 retry.go:31] will retry after 428.934292ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 18:57:12.004365  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:12.023770  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:12.023808  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:12.080967  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:12.295308  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:12.524587  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:12.524761  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:12.625464  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:12.858972  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:12.859016  497052 retry.go:31] will retry after 936.652695ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:13.023864  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:13.024037  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:13.080884  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:13.524452  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:13.524625  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:13.625057  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:13.796107  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1017 18:57:14.004844  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:14.025283  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:14.025448  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:14.080375  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:14.362485  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:14.362520  497052 retry.go:31] will retry after 1.406793949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:14.524067  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:14.524160  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:14.559567  497052 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1017 18:57:14.559633  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:14.579233  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:14.625109  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:14.684495  497052 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1017 18:57:14.698651  497052 addons.go:238] Setting addon gcp-auth=true in "addons-642189"
	I1017 18:57:14.698731  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:14.699124  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:14.718346  497052 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1017 18:57:14.718403  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:14.737635  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:14.833229  497052 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 18:57:14.834445  497052 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1017 18:57:14.835499  497052 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1017 18:57:14.835518  497052 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1017 18:57:14.849935  497052 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1017 18:57:14.849967  497052 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1017 18:57:14.863559  497052 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 18:57:14.863585  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1017 18:57:14.877534  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 18:57:15.023466  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:15.024197  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:15.080355  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:15.207628  497052 addons.go:479] Verifying addon gcp-auth=true in "addons-642189"
	I1017 18:57:15.209936  497052 out.go:179] * Verifying gcp-auth addon...
	I1017 18:57:15.211922  497052 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1017 18:57:15.214515  497052 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1017 18:57:15.214540  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:15.523467  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:15.523811  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:15.580758  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:15.715890  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:15.770035  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:16.023838  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:16.024336  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:16.080438  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:16.216000  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:16.338218  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:16.338253  497052 retry.go:31] will retry after 2.303801595s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 18:57:16.504159  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:16.524422  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:16.524619  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:16.580711  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:16.715414  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:17.023865  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:17.023968  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:17.079798  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:17.216039  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:17.524165  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:17.524193  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:17.580090  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:17.715085  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:18.023971  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:18.024144  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:18.080129  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:18.215498  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:18.504486  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:18.524330  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:18.524555  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:18.580587  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:18.642741  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:18.715951  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:19.024798  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:19.024882  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:19.080294  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:19.215790  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:19.227321  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:19.227354  497052 retry.go:31] will retry after 3.672326615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:19.524111  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:19.524564  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:19.580795  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:19.715779  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:20.023847  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:20.024186  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:20.080231  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:20.216212  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:20.524711  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:20.524839  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:20.580856  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:20.715801  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:21.003671  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:21.023927  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:21.024167  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:21.080245  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:21.215742  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:21.524190  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:21.524193  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:21.579971  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:21.715317  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:22.023848  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:22.024229  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:22.080432  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:22.215614  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:22.524461  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:22.524537  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:22.580766  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:22.715546  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:22.900896  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1017 18:57:23.004894  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:23.024157  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:23.024312  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:23.080071  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:23.215146  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:23.477571  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:23.477610  497052 retry.go:31] will retry after 4.189491628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:23.524141  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:23.524182  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:23.580394  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:23.715181  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:24.023821  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:24.024052  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:24.080012  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:24.215521  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:24.523987  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:24.524128  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:24.580477  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:24.715505  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:25.023819  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:25.023936  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:25.080658  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:25.215752  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:25.504183  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:25.524016  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:25.524303  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:25.580200  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:25.714994  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:26.024295  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:26.024438  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:26.080210  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:26.215880  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:26.524463  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:26.524738  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:26.580757  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:26.715596  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:27.023921  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:27.023947  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:27.080925  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:27.216164  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:27.504598  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:27.523499  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:27.523924  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:27.580107  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:27.668243  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:27.715749  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:28.024160  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:28.024263  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:28.080326  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:28.215167  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:28.240817  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:28.240856  497052 retry.go:31] will retry after 7.578900836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
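
The failure above is kubectl's schema validation rejecting /etc/kubernetes/addons/ig-crd.yaml because the manifest is missing its top-level apiVersion and kind fields; every other object in the batch applies cleanly. minikube's addon applier (retry.go) responds by re-running the identical kubectl command after a growing, jittered delay: 7.58s here, then 9.09s and 12.56s further down. A minimal Go sketch of that retry pattern follows; applyWithRetry and flagify are hypothetical names for illustration, not minikube's actual implementation, and the base delay is assumed.

    package main

    import (
        "fmt"
        "math/rand"
        "os"
        "os/exec"
        "time"
    )

    // applyWithRetry runs `kubectl apply --force` over the given manifests and,
    // on a non-zero exit, retries after a jittered delay that grows per attempt.
    func applyWithRetry(kubeconfig string, files []string, attempts int) error {
        base := 5 * time.Second // assumed base; the logged delays are jittered
        var lastErr error
        for i := 0; i < attempts; i++ {
            args := append([]string{"apply", "--force"}, flagify(files)...)
            cmd := exec.Command("kubectl", args...)
            cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
            out, err := cmd.CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
            // Grow the backoff with jitter, as in the 7.5s -> 9.0s -> 12.5s pattern.
            time.Sleep(base + time.Duration(rand.Int63n(int64(base))))
            base = base * 3 / 2
        }
        return lastErr
    }

    func flagify(files []string) []string {
        var args []string
        for _, f := range files {
            args = append(args, "-f", f)
        }
        return args
    }

    func main() {
        err := applyWithRetry("/var/lib/minikube/kubeconfig",
            []string{"/etc/kubernetes/addons/ig-crd.yaml", "/etc/kubernetes/addons/ig-deployment.yaml"}, 5)
        if err != nil {
            fmt.Println(err)
        }
    }

Note that retrying cannot fix this particular error: a manifest without apiVersion and kind fails validation identically on every attempt, which is why the same stdout/stderr repeats below.
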
	I1017 18:57:28.524716  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:28.524737  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:28.580418  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:28.715131  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:29.024361  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:29.024361  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:29.080145  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:29.215004  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:29.523591  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:29.523928  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:29.579844  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:29.716107  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:30.003962  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:30.023921  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:30.024112  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:30.080909  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:30.216485  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:30.524412  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:30.524621  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:30.580228  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:30.715284  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:31.023877  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:31.024146  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:31.080099  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:31.215039  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:31.523839  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:31.524125  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:31.580065  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:31.714930  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:32.024406  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:32.024482  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:32.125493  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:32.215817  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:32.503938  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:32.524589  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:32.524654  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:32.580605  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:32.715274  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:33.023722  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:33.024149  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:33.080246  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:33.215820  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:33.524333  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:33.524371  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:33.580398  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:33.715635  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:34.023932  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:34.024402  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:34.080367  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:34.215529  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:34.504708  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:34.524093  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:34.524098  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:34.580333  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:34.715532  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:35.024123  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:35.024351  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:35.080239  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:35.215351  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:35.523629  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:35.523935  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:35.580045  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:35.715612  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:35.820750  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:36.023965  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:36.024021  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:36.080801  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:36.215389  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:36.403637  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:36.403667  497052 retry.go:31] will retry after 9.094163433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:36.524205  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:36.524424  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:36.580173  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:36.715363  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:37.003762  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:37.023878  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:37.023966  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:37.080872  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:37.216214  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:37.523970  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:37.524241  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:37.580232  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:37.714997  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:38.023968  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:38.024177  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:38.079926  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:38.216049  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:38.524034  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:38.524105  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:38.579976  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:38.714937  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:39.023399  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:39.024311  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:39.080235  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:39.216131  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:39.504158  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:39.523990  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:39.524189  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:39.580047  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:39.715257  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:40.023393  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:40.023976  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:40.080084  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:40.215485  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:40.523925  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:40.524013  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:40.580323  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:40.715395  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:41.024104  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:41.024245  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:41.080178  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:41.215309  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:41.504619  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:41.524016  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:41.524317  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:41.580357  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:41.715073  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:42.023879  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:42.023903  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:42.080873  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:42.216511  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:42.524217  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:42.524313  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:42.580369  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:42.715435  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:43.023415  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:43.023776  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:43.080760  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:43.215974  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:43.523634  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:43.523962  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:43.579919  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:43.715519  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:44.004772  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:44.023853  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:44.024171  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:44.080259  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:44.215437  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:44.524032  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:44.524075  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:44.580173  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:44.714965  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:45.023916  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:45.023965  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:45.080809  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:45.215601  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:45.498918  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:45.523981  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:45.524351  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:45.580212  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:45.715009  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:46.023052  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:46.023708  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1017 18:57:46.062320  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:46.062355  497052 retry.go:31] will retry after 12.563757691s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:46.080166  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:46.215258  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:46.504513  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:46.523648  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:46.524135  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:46.580068  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:46.714843  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:47.024148  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:47.024242  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:47.079586  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:47.215771  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:47.502971  497052 node_ready.go:49] node "addons-642189" is "Ready"
	I1017 18:57:47.503008  497052 node_ready.go:38] duration metric: took 40.002301943s for node "addons-642189" to be "Ready" ...
	I1017 18:57:47.503027  497052 api_server.go:52] waiting for apiserver process to appear ...
	I1017 18:57:47.503088  497052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 18:57:47.521624  497052 api_server.go:72] duration metric: took 40.648204445s to wait for apiserver process to appear ...
	I1017 18:57:47.521653  497052 api_server.go:88] waiting for apiserver healthz status ...
	I1017 18:57:47.521676  497052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 18:57:47.523626  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:47.523907  497052 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 18:57:47.523928  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:47.528891  497052 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 18:57:47.529870  497052 api_server.go:141] control plane version: v1.34.1
	I1017 18:57:47.529904  497052 api_server.go:131] duration metric: took 8.243043ms to wait for apiserver health ...
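
The healthz probe that succeeds here is a plain HTTPS GET against the apiserver at https://192.168.49.2:8443/healthz, treating a 200 response with body "ok" as healthy. A minimal sketch of the same check, assuming the cluster's self-signed certificate is skipped rather than verified (acceptable only for a local test cluster):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // The test apiserver presents a self-signed certificate, so the probe
        // skips TLS verification for this local-only health check.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with the literal body "ok",
        // matching the "returned 200: ok" lines above.
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }
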
	I1017 18:57:47.529916  497052 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 18:57:47.534863  497052 system_pods.go:59] 20 kube-system pods found
	I1017 18:57:47.534901  497052 system_pods.go:61] "amd-gpu-device-plugin-t48xm" [3156d3f4-4196-443e-86ea-eb10fdc988bc] Pending
	I1017 18:57:47.534916  497052 system_pods.go:61] "coredns-66bc5c9577-9qzb6" [fac124c4-9636-4867-b8d6-b85ace3157be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:57:47.534923  497052 system_pods.go:61] "csi-hostpath-attacher-0" [ff1154c7-8dcf-4784-aeb0-4b7f71b610d8] Pending
	I1017 18:57:47.534935  497052 system_pods.go:61] "csi-hostpath-resizer-0" [5848a585-6545-4769-aef8-eece82ad7a3e] Pending
	I1017 18:57:47.534940  497052 system_pods.go:61] "csi-hostpathplugin-5kdtq" [51ff254c-6eca-4206-bc0d-d45c02ee3e01] Pending
	I1017 18:57:47.534946  497052 system_pods.go:61] "etcd-addons-642189" [19dd00f5-11cf-4bcb-8d15-81fdee0122ac] Running
	I1017 18:57:47.534961  497052 system_pods.go:61] "kindnet-6gk89" [fa4d48ce-32f6-4a29-a643-adf89425fb2d] Running
	I1017 18:57:47.534966  497052 system_pods.go:61] "kube-apiserver-addons-642189" [1416f756-9377-46ae-8c1e-89cad4fc1c3d] Running
	I1017 18:57:47.534978  497052 system_pods.go:61] "kube-controller-manager-addons-642189" [8db3ab0c-4f17-48cc-9e53-5522c8f070d5] Running
	I1017 18:57:47.534988  497052 system_pods.go:61] "kube-ingress-dns-minikube" [f8388279-4ec9-4e98-9cd9-b8d496b5d57a] Pending
	I1017 18:57:47.534992  497052 system_pods.go:61] "kube-proxy-n4pk6" [72dac253-09fc-4aa9-aed7-196eed4d49e7] Running
	I1017 18:57:47.535001  497052 system_pods.go:61] "kube-scheduler-addons-642189" [26a48cb9-6a80-4c21-b965-a2dec20ca37d] Running
	I1017 18:57:47.535009  497052 system_pods.go:61] "metrics-server-85b7d694d7-7d6xn" [3877854d-d5e2-4181-ba78-988a54712111] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:57:47.535020  497052 system_pods.go:61] "nvidia-device-plugin-daemonset-5272k" [f201ab4f-abad-46f2-a109-95004c7250f7] Pending
	I1017 18:57:47.535031  497052 system_pods.go:61] "registry-6b586f9694-gfg4q" [f3780320-4513-4f0c-a613-2e6dae9f1050] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:57:47.535038  497052 system_pods.go:61] "registry-creds-764b6fb674-wpqx2" [ff764293-9993-42e2-aed2-de34ffce5c63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:57:47.535044  497052 system_pods.go:61] "registry-proxy-7wchq" [ba24cd6f-ac09-4d7a-8504-fc72367cd2c3] Pending
	I1017 18:57:47.535053  497052 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qxcgb" [907c8bda-b107-4358-b274-36307a0e95d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:47.535059  497052 system_pods.go:61] "snapshot-controller-7d9fbc56b8-x4f9r" [8bad8697-4458-4007-beb2-6ee425032923] Pending
	I1017 18:57:47.535067  497052 system_pods.go:61] "storage-provisioner" [6b2b7583-da33-4e05-bf2a-75ac8e369265] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:57:47.535076  497052 system_pods.go:74] duration metric: took 5.152079ms to wait for pod list to return data ...
	I1017 18:57:47.535100  497052 default_sa.go:34] waiting for default service account to be created ...
	I1017 18:57:47.537220  497052 default_sa.go:45] found service account: "default"
	I1017 18:57:47.537244  497052 default_sa.go:55] duration metric: took 2.136658ms for default service account to be created ...
	I1017 18:57:47.537254  497052 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 18:57:47.542497  497052 system_pods.go:86] 20 kube-system pods found
	I1017 18:57:47.542527  497052 system_pods.go:89] "amd-gpu-device-plugin-t48xm" [3156d3f4-4196-443e-86ea-eb10fdc988bc] Pending
	I1017 18:57:47.542536  497052 system_pods.go:89] "coredns-66bc5c9577-9qzb6" [fac124c4-9636-4867-b8d6-b85ace3157be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:57:47.542541  497052 system_pods.go:89] "csi-hostpath-attacher-0" [ff1154c7-8dcf-4784-aeb0-4b7f71b610d8] Pending
	I1017 18:57:47.542547  497052 system_pods.go:89] "csi-hostpath-resizer-0" [5848a585-6545-4769-aef8-eece82ad7a3e] Pending
	I1017 18:57:47.542550  497052 system_pods.go:89] "csi-hostpathplugin-5kdtq" [51ff254c-6eca-4206-bc0d-d45c02ee3e01] Pending
	I1017 18:57:47.542553  497052 system_pods.go:89] "etcd-addons-642189" [19dd00f5-11cf-4bcb-8d15-81fdee0122ac] Running
	I1017 18:57:47.542556  497052 system_pods.go:89] "kindnet-6gk89" [fa4d48ce-32f6-4a29-a643-adf89425fb2d] Running
	I1017 18:57:47.542560  497052 system_pods.go:89] "kube-apiserver-addons-642189" [1416f756-9377-46ae-8c1e-89cad4fc1c3d] Running
	I1017 18:57:47.542565  497052 system_pods.go:89] "kube-controller-manager-addons-642189" [8db3ab0c-4f17-48cc-9e53-5522c8f070d5] Running
	I1017 18:57:47.542572  497052 system_pods.go:89] "kube-ingress-dns-minikube" [f8388279-4ec9-4e98-9cd9-b8d496b5d57a] Pending
	I1017 18:57:47.542578  497052 system_pods.go:89] "kube-proxy-n4pk6" [72dac253-09fc-4aa9-aed7-196eed4d49e7] Running
	I1017 18:57:47.542584  497052 system_pods.go:89] "kube-scheduler-addons-642189" [26a48cb9-6a80-4c21-b965-a2dec20ca37d] Running
	I1017 18:57:47.542596  497052 system_pods.go:89] "metrics-server-85b7d694d7-7d6xn" [3877854d-d5e2-4181-ba78-988a54712111] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:57:47.542604  497052 system_pods.go:89] "nvidia-device-plugin-daemonset-5272k" [f201ab4f-abad-46f2-a109-95004c7250f7] Pending
	I1017 18:57:47.542612  497052 system_pods.go:89] "registry-6b586f9694-gfg4q" [f3780320-4513-4f0c-a613-2e6dae9f1050] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:57:47.542621  497052 system_pods.go:89] "registry-creds-764b6fb674-wpqx2" [ff764293-9993-42e2-aed2-de34ffce5c63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:57:47.542625  497052 system_pods.go:89] "registry-proxy-7wchq" [ba24cd6f-ac09-4d7a-8504-fc72367cd2c3] Pending
	I1017 18:57:47.542635  497052 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qxcgb" [907c8bda-b107-4358-b274-36307a0e95d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:47.542646  497052 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x4f9r" [8bad8697-4458-4007-beb2-6ee425032923] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:47.542654  497052 system_pods.go:89] "storage-provisioner" [6b2b7583-da33-4e05-bf2a-75ac8e369265] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:57:47.542715  497052 retry.go:31] will retry after 269.224857ms: missing components: kube-dns
	I1017 18:57:47.589834  497052 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 18:57:47.589863  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:47.715757  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:47.818157  497052 system_pods.go:86] 20 kube-system pods found
	I1017 18:57:47.818367  497052 system_pods.go:89] "amd-gpu-device-plugin-t48xm" [3156d3f4-4196-443e-86ea-eb10fdc988bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1017 18:57:47.818390  497052 system_pods.go:89] "coredns-66bc5c9577-9qzb6" [fac124c4-9636-4867-b8d6-b85ace3157be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:57:47.818404  497052 system_pods.go:89] "csi-hostpath-attacher-0" [ff1154c7-8dcf-4784-aeb0-4b7f71b610d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 18:57:47.818415  497052 system_pods.go:89] "csi-hostpath-resizer-0" [5848a585-6545-4769-aef8-eece82ad7a3e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 18:57:47.818424  497052 system_pods.go:89] "csi-hostpathplugin-5kdtq" [51ff254c-6eca-4206-bc0d-d45c02ee3e01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 18:57:47.818435  497052 system_pods.go:89] "etcd-addons-642189" [19dd00f5-11cf-4bcb-8d15-81fdee0122ac] Running
	I1017 18:57:47.818442  497052 system_pods.go:89] "kindnet-6gk89" [fa4d48ce-32f6-4a29-a643-adf89425fb2d] Running
	I1017 18:57:47.818448  497052 system_pods.go:89] "kube-apiserver-addons-642189" [1416f756-9377-46ae-8c1e-89cad4fc1c3d] Running
	I1017 18:57:47.818456  497052 system_pods.go:89] "kube-controller-manager-addons-642189" [8db3ab0c-4f17-48cc-9e53-5522c8f070d5] Running
	I1017 18:57:47.818465  497052 system_pods.go:89] "kube-ingress-dns-minikube" [f8388279-4ec9-4e98-9cd9-b8d496b5d57a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 18:57:47.818473  497052 system_pods.go:89] "kube-proxy-n4pk6" [72dac253-09fc-4aa9-aed7-196eed4d49e7] Running
	I1017 18:57:47.818480  497052 system_pods.go:89] "kube-scheduler-addons-642189" [26a48cb9-6a80-4c21-b965-a2dec20ca37d] Running
	I1017 18:57:47.818498  497052 system_pods.go:89] "metrics-server-85b7d694d7-7d6xn" [3877854d-d5e2-4181-ba78-988a54712111] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:57:47.818507  497052 system_pods.go:89] "nvidia-device-plugin-daemonset-5272k" [f201ab4f-abad-46f2-a109-95004c7250f7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 18:57:47.818516  497052 system_pods.go:89] "registry-6b586f9694-gfg4q" [f3780320-4513-4f0c-a613-2e6dae9f1050] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:57:47.818528  497052 system_pods.go:89] "registry-creds-764b6fb674-wpqx2" [ff764293-9993-42e2-aed2-de34ffce5c63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:57:47.818552  497052 system_pods.go:89] "registry-proxy-7wchq" [ba24cd6f-ac09-4d7a-8504-fc72367cd2c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 18:57:47.818564  497052 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qxcgb" [907c8bda-b107-4358-b274-36307a0e95d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:47.818575  497052 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x4f9r" [8bad8697-4458-4007-beb2-6ee425032923] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:47.818584  497052 system_pods.go:89] "storage-provisioner" [6b2b7583-da33-4e05-bf2a-75ac8e369265] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:57:47.818609  497052 retry.go:31] will retry after 250.033006ms: missing components: kube-dns
	I1017 18:57:48.023864  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:48.023905  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:48.073068  497052 system_pods.go:86] 20 kube-system pods found
	I1017 18:57:48.073106  497052 system_pods.go:89] "amd-gpu-device-plugin-t48xm" [3156d3f4-4196-443e-86ea-eb10fdc988bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1017 18:57:48.073114  497052 system_pods.go:89] "coredns-66bc5c9577-9qzb6" [fac124c4-9636-4867-b8d6-b85ace3157be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:57:48.073122  497052 system_pods.go:89] "csi-hostpath-attacher-0" [ff1154c7-8dcf-4784-aeb0-4b7f71b610d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 18:57:48.073128  497052 system_pods.go:89] "csi-hostpath-resizer-0" [5848a585-6545-4769-aef8-eece82ad7a3e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 18:57:48.073135  497052 system_pods.go:89] "csi-hostpathplugin-5kdtq" [51ff254c-6eca-4206-bc0d-d45c02ee3e01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 18:57:48.073139  497052 system_pods.go:89] "etcd-addons-642189" [19dd00f5-11cf-4bcb-8d15-81fdee0122ac] Running
	I1017 18:57:48.073143  497052 system_pods.go:89] "kindnet-6gk89" [fa4d48ce-32f6-4a29-a643-adf89425fb2d] Running
	I1017 18:57:48.073147  497052 system_pods.go:89] "kube-apiserver-addons-642189" [1416f756-9377-46ae-8c1e-89cad4fc1c3d] Running
	I1017 18:57:48.073150  497052 system_pods.go:89] "kube-controller-manager-addons-642189" [8db3ab0c-4f17-48cc-9e53-5522c8f070d5] Running
	I1017 18:57:48.073155  497052 system_pods.go:89] "kube-ingress-dns-minikube" [f8388279-4ec9-4e98-9cd9-b8d496b5d57a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 18:57:48.073158  497052 system_pods.go:89] "kube-proxy-n4pk6" [72dac253-09fc-4aa9-aed7-196eed4d49e7] Running
	I1017 18:57:48.073162  497052 system_pods.go:89] "kube-scheduler-addons-642189" [26a48cb9-6a80-4c21-b965-a2dec20ca37d] Running
	I1017 18:57:48.073167  497052 system_pods.go:89] "metrics-server-85b7d694d7-7d6xn" [3877854d-d5e2-4181-ba78-988a54712111] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:57:48.073176  497052 system_pods.go:89] "nvidia-device-plugin-daemonset-5272k" [f201ab4f-abad-46f2-a109-95004c7250f7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 18:57:48.073181  497052 system_pods.go:89] "registry-6b586f9694-gfg4q" [f3780320-4513-4f0c-a613-2e6dae9f1050] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:57:48.073190  497052 system_pods.go:89] "registry-creds-764b6fb674-wpqx2" [ff764293-9993-42e2-aed2-de34ffce5c63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:57:48.073196  497052 system_pods.go:89] "registry-proxy-7wchq" [ba24cd6f-ac09-4d7a-8504-fc72367cd2c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 18:57:48.073201  497052 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qxcgb" [907c8bda-b107-4358-b274-36307a0e95d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:48.073208  497052 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x4f9r" [8bad8697-4458-4007-beb2-6ee425032923] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:48.073213  497052 system_pods.go:89] "storage-provisioner" [6b2b7583-da33-4e05-bf2a-75ac8e369265] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:57:48.073230  497052 retry.go:31] will retry after 463.707569ms: missing components: kube-dns
	I1017 18:57:48.080096  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:48.215550  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:48.524793  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:48.525000  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:48.544223  497052 system_pods.go:86] 20 kube-system pods found
	I1017 18:57:48.544273  497052 system_pods.go:89] "amd-gpu-device-plugin-t48xm" [3156d3f4-4196-443e-86ea-eb10fdc988bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1017 18:57:48.544283  497052 system_pods.go:89] "coredns-66bc5c9577-9qzb6" [fac124c4-9636-4867-b8d6-b85ace3157be] Running
	I1017 18:57:48.544304  497052 system_pods.go:89] "csi-hostpath-attacher-0" [ff1154c7-8dcf-4784-aeb0-4b7f71b610d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 18:57:48.544320  497052 system_pods.go:89] "csi-hostpath-resizer-0" [5848a585-6545-4769-aef8-eece82ad7a3e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 18:57:48.544338  497052 system_pods.go:89] "csi-hostpathplugin-5kdtq" [51ff254c-6eca-4206-bc0d-d45c02ee3e01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 18:57:48.544345  497052 system_pods.go:89] "etcd-addons-642189" [19dd00f5-11cf-4bcb-8d15-81fdee0122ac] Running
	I1017 18:57:48.544356  497052 system_pods.go:89] "kindnet-6gk89" [fa4d48ce-32f6-4a29-a643-adf89425fb2d] Running
	I1017 18:57:48.544363  497052 system_pods.go:89] "kube-apiserver-addons-642189" [1416f756-9377-46ae-8c1e-89cad4fc1c3d] Running
	I1017 18:57:48.544369  497052 system_pods.go:89] "kube-controller-manager-addons-642189" [8db3ab0c-4f17-48cc-9e53-5522c8f070d5] Running
	I1017 18:57:48.544382  497052 system_pods.go:89] "kube-ingress-dns-minikube" [f8388279-4ec9-4e98-9cd9-b8d496b5d57a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 18:57:48.544388  497052 system_pods.go:89] "kube-proxy-n4pk6" [72dac253-09fc-4aa9-aed7-196eed4d49e7] Running
	I1017 18:57:48.544395  497052 system_pods.go:89] "kube-scheduler-addons-642189" [26a48cb9-6a80-4c21-b965-a2dec20ca37d] Running
	I1017 18:57:48.544403  497052 system_pods.go:89] "metrics-server-85b7d694d7-7d6xn" [3877854d-d5e2-4181-ba78-988a54712111] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:57:48.544417  497052 system_pods.go:89] "nvidia-device-plugin-daemonset-5272k" [f201ab4f-abad-46f2-a109-95004c7250f7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 18:57:48.544427  497052 system_pods.go:89] "registry-6b586f9694-gfg4q" [f3780320-4513-4f0c-a613-2e6dae9f1050] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:57:48.544441  497052 system_pods.go:89] "registry-creds-764b6fb674-wpqx2" [ff764293-9993-42e2-aed2-de34ffce5c63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:57:48.544456  497052 system_pods.go:89] "registry-proxy-7wchq" [ba24cd6f-ac09-4d7a-8504-fc72367cd2c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 18:57:48.544477  497052 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qxcgb" [907c8bda-b107-4358-b274-36307a0e95d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:48.544490  497052 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x4f9r" [8bad8697-4458-4007-beb2-6ee425032923] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:48.544497  497052 system_pods.go:89] "storage-provisioner" [6b2b7583-da33-4e05-bf2a-75ac8e369265] Running
	I1017 18:57:48.544510  497052 system_pods.go:126] duration metric: took 1.007247909s to wait for k8s-apps to be running ...
	I1017 18:57:48.544525  497052 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 18:57:48.544594  497052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 18:57:48.560799  497052 system_svc.go:56] duration metric: took 16.260831ms WaitForService to wait for kubelet
	I1017 18:57:48.560848  497052 kubeadm.go:586] duration metric: took 41.687432721s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 18:57:48.560887  497052 node_conditions.go:102] verifying NodePressure condition ...
	I1017 18:57:48.564184  497052 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 18:57:48.564213  497052 node_conditions.go:123] node cpu capacity is 8
	I1017 18:57:48.564228  497052 node_conditions.go:105] duration metric: took 3.337392ms to run NodePressure ...
	I1017 18:57:48.564242  497052 start.go:241] waiting for startup goroutines ...
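
The NodePressure verification just above amounts to reading the node object's reported capacity (304681132Ki ephemeral storage, 8 CPUs here) and its pressure conditions. A minimal client-go sketch of that read, under the assumption of kubeconfig access; this is illustrative, not minikube's exact code:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-642189", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Capacity figures matching the log lines above.
        fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
        fmt.Println("cpu:", node.Status.Capacity.Cpu().Value())
        // Pressure conditions should all report False on a healthy node.
        for _, c := range node.Status.Conditions {
            if c.Type == "MemoryPressure" || c.Type == "DiskPressure" || c.Type == "PIDPressure" {
                fmt.Printf("%s=%s\n", c.Type, c.Status)
            }
        }
    }
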
	I1017 18:57:48.580734  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:48.715507  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:49.024318  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:49.024668  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:49.081142  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:49.216159  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:49.524364  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:49.524391  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:49.580811  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:49.715632  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:50.024612  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:50.024620  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:50.081464  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:50.215831  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:50.524215  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:50.524445  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:50.581147  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:50.715396  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:51.024071  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:51.024313  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:51.081196  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:51.215324  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:51.524319  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:51.524452  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:51.580623  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:51.714944  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:52.024668  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:52.024807  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:52.080957  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:52.215939  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:52.524742  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:52.524864  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:52.625023  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:52.715548  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:53.023855  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:53.024642  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:53.081152  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:53.216236  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:53.524260  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:53.524565  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:53.581097  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:53.715049  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:54.025141  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:54.025408  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:54.080570  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:54.215377  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:54.523636  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:54.524141  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:54.580317  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:54.715090  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:55.024006  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:55.024162  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:55.080774  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:55.216564  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:55.525058  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:55.525089  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:55.580313  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:55.714648  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:56.086941  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:56.087382  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:56.087559  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:56.215588  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:56.524428  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:56.524453  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:56.580309  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:56.714854  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:57.024303  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:57.024350  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:57.081317  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:57.215502  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:57.524142  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:57.524233  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:57.580869  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:57.715740  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:58.024382  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:58.024539  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:58.080946  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:58.216425  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:58.524609  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:58.524630  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:58.624973  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:58.626990  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:58.725716  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:59.023662  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:59.024322  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:59.080795  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:59.199489  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:59.199521  497052 retry.go:31] will retry after 25.11020404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
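	The scheduled retry above will keep failing for the same reason: kubectl validates that every document in an applied manifest declares both apiVersion and kind, and one document in ig-crd.yaml evidently does not. A hedged way to reproduce the validation error without touching the cluster (paths and binary taken from the log, --dry-run=client is standard kubectl):
	
	# Client-side dry run; the same "apiVersion not set, kind not set"
	# message should appear if the manifest is malformed.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml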
	I1017 18:57:59.215207  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:59.526666  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:59.526867  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:59.627142  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:59.715564  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:00.024935  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:00.025115  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:00.081298  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:00.216454  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:00.524303  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:00.524339  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:00.581263  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:00.715417  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:01.024280  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:01.024599  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:01.081198  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:01.215639  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:01.524068  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:01.524430  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:01.581138  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:01.714646  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:02.024425  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:02.024537  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:02.081428  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:02.216144  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:02.524029  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:02.524050  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:02.580648  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:02.715608  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:03.025042  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:03.025065  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:03.081490  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:03.215752  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:03.524103  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:03.524318  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:03.581819  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:03.715616  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:04.024871  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:04.024924  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:04.081122  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:04.216889  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:04.524591  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:04.524604  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:04.581069  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:04.715472  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:05.024597  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:05.024637  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:05.081066  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:05.215075  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:05.523902  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:05.523934  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:05.581295  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:05.716290  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:06.024179  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:06.024467  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:06.080523  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:06.215971  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:06.524356  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:06.524588  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:06.581481  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:06.714843  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:07.024176  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:07.024231  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:07.080960  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:07.216581  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:07.524002  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:07.524170  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:07.580602  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:07.715066  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:08.023806  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:08.024461  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:08.080731  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:08.216258  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:08.636458  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:08.636596  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:08.636658  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:08.738711  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:09.023868  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:09.023883  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:09.080963  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:09.216107  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:09.525073  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:09.525131  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:09.581666  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:09.716459  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:10.027416  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:10.028729  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:10.085163  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:10.215722  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:10.525679  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:10.525743  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:10.582132  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:10.715056  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:11.024390  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:11.024397  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:11.081037  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:11.215790  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:11.524787  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:11.527505  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:11.581459  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:11.715345  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:12.023968  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:12.024024  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:12.081593  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:12.216159  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:12.524501  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:12.524859  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:12.581381  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:12.715192  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:13.024163  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:13.024354  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:13.080943  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:13.216277  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:13.524056  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:13.524267  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:13.580654  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:13.715865  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:14.024630  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:14.024677  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:14.081326  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:14.216132  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:14.524283  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:14.524447  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:14.581268  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:14.714888  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:15.024379  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:15.024435  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:15.080975  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:15.215395  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:15.535661  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:15.535709  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:15.602959  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:15.732588  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:16.024147  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:16.024274  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:16.080876  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:16.215890  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:16.523508  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:16.523575  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:16.580950  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:16.715473  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:17.024150  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:17.024384  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:17.081049  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:17.216306  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:17.523481  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:17.524015  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:17.580010  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:17.714626  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:18.024187  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:18.024242  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:18.080859  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:18.216442  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:18.523705  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:18.523802  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:18.581396  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:18.714969  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:19.024763  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:19.024774  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:19.081092  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:19.215232  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:19.524666  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:19.525441  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:19.625362  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:19.715298  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:20.025401  497052 kapi.go:107] duration metric: took 1m11.504323467s to wait for kubernetes.io/minikube-addons=registry ...
	I1017 18:58:20.025480  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:20.081187  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:20.216271  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:20.523714  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:20.628248  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:20.717128  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:21.025181  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:21.080494  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:21.215705  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:21.524658  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:21.580861  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:21.715841  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:22.024816  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:22.082581  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:22.216137  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:22.523798  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:22.580346  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:22.714773  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:23.024331  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:23.080872  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:23.216662  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:23.525201  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:23.580758  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:23.715856  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:24.029054  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:24.082235  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:24.217536  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:24.311024  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:58:24.525578  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:24.588801  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:24.731261  497052 kapi.go:107] duration metric: took 1m9.519331114s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1017 18:58:24.798993  497052 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-642189 cluster.
	I1017 18:58:24.822247  497052 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1017 18:58:24.844564  497052 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
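	To sketch the opt-out mentioned above: the gcp-auth addon's webhook skips pods whose metadata carries the gcp-auth-skip-secret label key at creation time. A minimal illustrative example (pod name and image are hypothetical):
	
	# Create a pod that the gcp-auth webhook should leave alone.
	kubectl run demo --image=nginx --labels=gcp-auth-skip-secret=true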
	I1017 18:58:25.025630  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:25.082643  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:25.157473  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:25.157512  497052 retry.go:31] will retry after 41.701288149s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
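	The stderr itself names the escape hatch: validation can be disabled per invocation. A manual workaround sketch, assuming the manifest content is otherwise correct and only the type-metadata check is at fault:
	
	# Same command as the failing callback, with validation turned off.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml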
	I1017 18:58:25.524242  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:25.581005  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:26.024837  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:26.081256  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:26.523996  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:26.581143  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:27.024258  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:27.081001  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:27.523880  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:27.581757  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:28.023725  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:28.081352  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:28.524577  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:28.596367  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:29.024420  497052 kapi.go:107] duration metric: took 1m20.50428445s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1017 18:58:29.080662  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:29.581229  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:30.081310  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:30.581187  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:31.081450  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:31.581312  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:32.080918  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:32.581210  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:33.080838  497052 kapi.go:107] duration metric: took 1m24.004007789s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
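	Each kapi.go wait above polls pods by label selector until they leave Pending; roughly the same check can be done by hand with kubectl (namespace inferred from the container listing further down, timeout illustrative):
	
	# Block until the addon's pods report Ready, or the timeout expires.
	kubectl -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver \
	  --for=condition=Ready --timeout=6m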
	I1017 18:59:06.862175  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1017 18:59:07.424178  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 18:59:07.424316  497052 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1017 18:59:07.426510  497052 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, cloud-spanner, ingress-dns, default-storageclass, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1017 18:59:07.427821  497052 addons.go:514] duration metric: took 2m0.554363307s for enable addons: enabled=[registry-creds amd-gpu-device-plugin cloud-spanner ingress-dns default-storageclass storage-provisioner nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1017 18:59:07.427878  497052 start.go:246] waiting for cluster config update ...
	I1017 18:59:07.427905  497052 start.go:255] writing updated cluster config ...
	I1017 18:59:07.428260  497052 ssh_runner.go:195] Run: rm -f paused
	I1017 18:59:07.432549  497052 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 18:59:07.436954  497052 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9qzb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:07.441754  497052 pod_ready.go:94] pod "coredns-66bc5c9577-9qzb6" is "Ready"
	I1017 18:59:07.441785  497052 pod_ready.go:86] duration metric: took 4.804584ms for pod "coredns-66bc5c9577-9qzb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:07.443790  497052 pod_ready.go:83] waiting for pod "etcd-addons-642189" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:07.447807  497052 pod_ready.go:94] pod "etcd-addons-642189" is "Ready"
	I1017 18:59:07.447829  497052 pod_ready.go:86] duration metric: took 4.018226ms for pod "etcd-addons-642189" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:07.449735  497052 pod_ready.go:83] waiting for pod "kube-apiserver-addons-642189" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:07.453604  497052 pod_ready.go:94] pod "kube-apiserver-addons-642189" is "Ready"
	I1017 18:59:07.453626  497052 pod_ready.go:86] duration metric: took 3.871056ms for pod "kube-apiserver-addons-642189" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:07.455589  497052 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-642189" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:07.836538  497052 pod_ready.go:94] pod "kube-controller-manager-addons-642189" is "Ready"
	I1017 18:59:07.836568  497052 pod_ready.go:86] duration metric: took 380.960631ms for pod "kube-controller-manager-addons-642189" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:08.036829  497052 pod_ready.go:83] waiting for pod "kube-proxy-n4pk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:08.437708  497052 pod_ready.go:94] pod "kube-proxy-n4pk6" is "Ready"
	I1017 18:59:08.437739  497052 pod_ready.go:86] duration metric: took 400.882008ms for pod "kube-proxy-n4pk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:08.637645  497052 pod_ready.go:83] waiting for pod "kube-scheduler-addons-642189" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:09.037213  497052 pod_ready.go:94] pod "kube-scheduler-addons-642189" is "Ready"
	I1017 18:59:09.037242  497052 pod_ready.go:86] duration metric: took 399.569767ms for pod "kube-scheduler-addons-642189" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:09.037254  497052 pod_ready.go:40] duration metric: took 1.604669397s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
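	The pod_ready sweep above walks the core control-plane labels one at a time; an approximate manual equivalent for a single label (selector from the log, timeout illustrative):
	
	# Wait on one control-plane component by its well-known label.
	kubectl -n kube-system wait pod -l component=kube-apiserver \
	  --for=condition=Ready --timeout=4m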
	I1017 18:59:09.085722  497052 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 18:59:09.087586  497052 out.go:179] * Done! kubectl is now configured to use "addons-642189" cluster and "default" namespace by default
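	Since minikube has switched the kubeconfig context at this point, a quick sanity check (expected output inferred from the profile name):
	
	kubectl config current-context   # expected: addons-642189
	kubectl get nodes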
	
	
	==> CRI-O <==
	Oct 17 19:00:05 addons-642189 crio[766]: time="2025-10-17T19:00:05.598317444Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-wpqx2/registry-creds" id=f4bcfcc3-5a1d-4cbe-9a2d-bd8d2de091bb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:00:05 addons-642189 crio[766]: time="2025-10-17T19:00:05.599159293Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:00:05 addons-642189 crio[766]: time="2025-10-17T19:00:05.604723526Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:00:05 addons-642189 crio[766]: time="2025-10-17T19:00:05.605211901Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:00:05 addons-642189 crio[766]: time="2025-10-17T19:00:05.641415081Z" level=info msg="Created container d530a9f1a8aa91a88d8e279ebc8dd0f9aca84b78e9f83cbcf95a9dbe15a23283: kube-system/registry-creds-764b6fb674-wpqx2/registry-creds" id=f4bcfcc3-5a1d-4cbe-9a2d-bd8d2de091bb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:00:05 addons-642189 crio[766]: time="2025-10-17T19:00:05.642106171Z" level=info msg="Starting container: d530a9f1a8aa91a88d8e279ebc8dd0f9aca84b78e9f83cbcf95a9dbe15a23283" id=82e35c41-7f29-49b0-89e8-7e1d6cac7a89 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:00:05 addons-642189 crio[766]: time="2025-10-17T19:00:05.643847653Z" level=info msg="Started container" PID=9114 containerID=d530a9f1a8aa91a88d8e279ebc8dd0f9aca84b78e9f83cbcf95a9dbe15a23283 description=kube-system/registry-creds-764b6fb674-wpqx2/registry-creds id=82e35c41-7f29-49b0-89e8-7e1d6cac7a89 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e07a53b9bb4bde9a979158a00a4799dd3463f89d312254a4909fd00b079ca4d9
	Oct 17 19:01:01 addons-642189 crio[766]: time="2025-10-17T19:01:01.592373384Z" level=info msg="Stopping pod sandbox: 6dc30d738d40baf9e9afb4bcbd7f5c57105aa6d80ceb4f9a496973fdae0ef3c6" id=e13654c0-91cb-4020-82c3-b3408c0783b7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 19:01:01 addons-642189 crio[766]: time="2025-10-17T19:01:01.592440721Z" level=info msg="Stopped pod sandbox (already stopped): 6dc30d738d40baf9e9afb4bcbd7f5c57105aa6d80ceb4f9a496973fdae0ef3c6" id=e13654c0-91cb-4020-82c3-b3408c0783b7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 19:01:01 addons-642189 crio[766]: time="2025-10-17T19:01:01.592801569Z" level=info msg="Removing pod sandbox: 6dc30d738d40baf9e9afb4bcbd7f5c57105aa6d80ceb4f9a496973fdae0ef3c6" id=b5c46908-7908-44fe-8988-c86019019b7a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 19:01:01 addons-642189 crio[766]: time="2025-10-17T19:01:01.596795831Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 19:01:01 addons-642189 crio[766]: time="2025-10-17T19:01:01.596860649Z" level=info msg="Removed pod sandbox: 6dc30d738d40baf9e9afb4bcbd7f5c57105aa6d80ceb4f9a496973fdae0ef3c6" id=b5c46908-7908-44fe-8988-c86019019b7a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 19:01:58 addons-642189 crio[766]: time="2025-10-17T19:01:58.22862854Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-pbsvg/POD" id=48947eb7-ab2d-4619-a885-081227ef0380 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:01:58 addons-642189 crio[766]: time="2025-10-17T19:01:58.228802536Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:01:58 addons-642189 crio[766]: time="2025-10-17T19:01:58.235419728Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-pbsvg Namespace:default ID:462b2f54f40c6ededd5f65a030f0b2cde6ee9384bd7d7473e46fc39c7b81b356 UID:876857ff-b54d-4bbe-b34a-657920e8c37f NetNS:/var/run/netns/0cefbd39-830a-46af-bd6b-6e1b23c037e5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000630bf0}] Aliases:map[]}"
	Oct 17 19:01:58 addons-642189 crio[766]: time="2025-10-17T19:01:58.235465278Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-pbsvg to CNI network \"kindnet\" (type=ptp)"
	Oct 17 19:01:58 addons-642189 crio[766]: time="2025-10-17T19:01:58.246834214Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-pbsvg Namespace:default ID:462b2f54f40c6ededd5f65a030f0b2cde6ee9384bd7d7473e46fc39c7b81b356 UID:876857ff-b54d-4bbe-b34a-657920e8c37f NetNS:/var/run/netns/0cefbd39-830a-46af-bd6b-6e1b23c037e5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000630bf0}] Aliases:map[]}"
	Oct 17 19:01:58 addons-642189 crio[766]: time="2025-10-17T19:01:58.247017284Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-pbsvg for CNI network kindnet (type=ptp)"
	Oct 17 19:01:58 addons-642189 crio[766]: time="2025-10-17T19:01:58.247989067Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 19:01:58 addons-642189 crio[766]: time="2025-10-17T19:01:58.248955792Z" level=info msg="Ran pod sandbox 462b2f54f40c6ededd5f65a030f0b2cde6ee9384bd7d7473e46fc39c7b81b356 with infra container: default/hello-world-app-5d498dc89-pbsvg/POD" id=48947eb7-ab2d-4619-a885-081227ef0380 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:01:58 addons-642189 crio[766]: time="2025-10-17T19:01:58.250427774Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=a8cfb480-e41b-4094-80f9-85b8d39e2829 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:01:58 addons-642189 crio[766]: time="2025-10-17T19:01:58.250545647Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=a8cfb480-e41b-4094-80f9-85b8d39e2829 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:01:58 addons-642189 crio[766]: time="2025-10-17T19:01:58.250583092Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=a8cfb480-e41b-4094-80f9-85b8d39e2829 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:01:58 addons-642189 crio[766]: time="2025-10-17T19:01:58.251382759Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=3b71d7ae-47bf-4d23-8429-cf8690d598bf name=/runtime.v1.ImageService/PullImage
	Oct 17 19:01:58 addons-642189 crio[766]: time="2025-10-17T19:01:58.269042385Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
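	The container inventory that follows can be reproduced on the node itself with CRI-O's CLI; crictl is typically available inside the minikube node (flags are standard crictl):
	
	sudo crictl ps -a     # all containers, including exited ones
	sudo crictl images    # images known to CRI-O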
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	d530a9f1a8aa9       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   e07a53b9bb4bd       registry-creds-764b6fb674-wpqx2             kube-system
	00d99551d7c0c       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago        Running             nginx                                    0                   a5750d3fa2cda       nginx                                       default
	05c40821c0ea1       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   0f6149e8c6f6a       busybox                                     default
	621b748d53884       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago        Running             csi-snapshotter                          0                   af13f6543313d       csi-hostpathplugin-5kdtq                    kube-system
	317712e1d5627       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago        Running             csi-provisioner                          0                   af13f6543313d       csi-hostpathplugin-5kdtq                    kube-system
	c8951bd4e7631       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago        Running             liveness-probe                           0                   af13f6543313d       csi-hostpathplugin-5kdtq                    kube-system
	6073132bac88b       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago        Running             hostpath                                 0                   af13f6543313d       csi-hostpathplugin-5kdtq                    kube-system
	655687219dc3a       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago        Running             controller                               0                   e2ab0f3f62d10       ingress-nginx-controller-675c5ddd98-m2d8d   ingress-nginx
	b4ac0698e398e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago        Running             gcp-auth                                 0                   770e528328818       gcp-auth-78565c9fb4-qz4xs                   gcp-auth
	d3882b8636526       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago        Running             node-driver-registrar                    0                   af13f6543313d       csi-hostpathplugin-5kdtq                    kube-system
	c9c4e61a00241       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago        Running             gadget                                   0                   c0d1f662108bb       gadget-862fn                                gadget
	99fe19979e6f7       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   809255c8d95fe       registry-proxy-7wchq                        kube-system
	600ce5e0b6a85       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   af13f6543313d       csi-hostpathplugin-5kdtq                    kube-system
	da14c2626c054       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             3 minutes ago        Exited              patch                                    2                   d995aaab5f207       ingress-nginx-admission-patch-bm6p2         ingress-nginx
	214596c066d6e       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   ac43d3b720928       nvidia-device-plugin-daemonset-5272k        kube-system
	cd49fb8b1ee5c       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   312361186cc37       csi-hostpath-resizer-0                      kube-system
	b3f4b36a5cb43       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   3e25684aa0e80       snapshot-controller-7d9fbc56b8-x4f9r        kube-system
	fc47a341f594c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   c3b6c1f87ffa0       amd-gpu-device-plugin-t48xm                 kube-system
	dc706332dbb69       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago        Exited              create                                   0                   e75136860bf4b       ingress-nginx-admission-create-xlhk6        ingress-nginx
	d3140eef7e893       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   7da1353dd02e9       snapshot-controller-7d9fbc56b8-qxcgb        kube-system
	bce8d27694469       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   92e6db4838030       csi-hostpath-attacher-0                     kube-system
	26a77f9d8fd20       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   6ac10e16c389d       kube-ingress-dns-minikube                   kube-system
	afa9f6b049681       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           4 minutes ago        Running             registry                                 0                   b79875ef65108       registry-6b586f9694-gfg4q                   kube-system
	33c6465e1a0d9       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              4 minutes ago        Running             yakd                                     0                   b65a84b02008a       yakd-dashboard-5ff678cb9-76bx8              yakd-dashboard
	b6fecbd31e3b0       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             4 minutes ago        Running             local-path-provisioner                   0                   5d92653513a48       local-path-provisioner-648f6765c9-7cp9v     local-path-storage
	ea8c7aa6a69f9       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        4 minutes ago        Running             metrics-server                           0                   a38ba07469845       metrics-server-85b7d694d7-7d6xn             kube-system
	7f62e9677624b       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               4 minutes ago        Running             cloud-spanner-emulator                   0                   3f67b44522228       cloud-spanner-emulator-86bd5cbb97-fbjhl     default
	05b0d75fa7e33       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago        Running             storage-provisioner                      0                   d3ca4b2a3eaa6       storage-provisioner                         kube-system
	c8959e94a4c12       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             4 minutes ago        Running             coredns                                  0                   0067f27233069       coredns-66bc5c9577-9qzb6                    kube-system
	d6a7317aabf4d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago        Running             kindnet-cni                              0                   7290efc14442b       kindnet-6gk89                               kube-system
	49aea2d7818a2       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago        Running             kube-proxy                               0                   94d580da7a351       kube-proxy-n4pk6                            kube-system
	43e40655463cf       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             5 minutes ago        Running             kube-scheduler                           0                   55fed3d15ddf2       kube-scheduler-addons-642189                kube-system
	8b60fdbdcbbd6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             5 minutes ago        Running             etcd                                     0                   79c4f5c85b94c       etcd-addons-642189                          kube-system
	44a3d62e9e439       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             5 minutes ago        Running             kube-controller-manager                  0                   2cb9ab7dc6af7       kube-controller-manager-addons-642189       kube-system
	a76bbc48e30da       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             5 minutes ago        Running             kube-apiserver                           0                   b1c6a14f84229       kube-apiserver-addons-642189                kube-system
	
	
	==> coredns [c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854] <==
	[INFO] 10.244.0.21:40884 - 8865 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.007036755s
	[INFO] 10.244.0.21:55124 - 16399 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004380194s
	[INFO] 10.244.0.21:46667 - 54522 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004788681s
	[INFO] 10.244.0.21:41007 - 44985 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00427368s
	[INFO] 10.244.0.21:55813 - 25189 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006085568s
	[INFO] 10.244.0.21:59010 - 58996 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001063275s
	[INFO] 10.244.0.21:34140 - 24710 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002341588s
	[INFO] 10.244.0.25:34017 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000241325s
	[INFO] 10.244.0.25:50804 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000151345s
	[INFO] 10.244.0.31:47867 - 26992 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.00021103s
	[INFO] 10.244.0.31:46355 - 40137 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000280838s
	[INFO] 10.244.0.31:35867 - 32520 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000117139s
	[INFO] 10.244.0.31:60214 - 62165 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.00019555s
	[INFO] 10.244.0.31:58152 - 57290 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000113198s
	[INFO] 10.244.0.31:50599 - 65250 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000173022s
	[INFO] 10.244.0.31:42864 - 8328 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003454407s
	[INFO] 10.244.0.31:33693 - 45296 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.006190155s
	[INFO] 10.244.0.31:44680 - 10614 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.005159723s
	[INFO] 10.244.0.31:50290 - 61873 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.010181057s
	[INFO] 10.244.0.31:49660 - 53107 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.00482518s
	[INFO] 10.244.0.31:36506 - 42631 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.006930039s
	[INFO] 10.244.0.31:41142 - 63096 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004489039s
	[INFO] 10.244.0.31:42143 - 25602 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005332754s
	[INFO] 10.244.0.31:43929 - 52940 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001984596s
	[INFO] 10.244.0.31:42253 - 49690 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.002067952s
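	
	The NXDOMAIN fan-out above (accounts.google.com tried against kube-system.svc.cluster.local, svc.cluster.local, cluster.local, the host-derived "local", then the GCE internal domains) is standard ndots:5 search-path expansion from the kubelet-generated resolv.conf; only the final absolute query returns NOERROR. A sketch for confirming the search list from a pod in this report, assuming the busybox pod is still running (the namespace-specific first entry differs per pod):

	  # The search domains printed here should match the suffixes seen in the
	  # coredns queries above; ndots:5 is the kubelet default, assumed not overridden.
	  kubectl exec busybox -- cat /etc/resolv.conf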
	
	
	==> describe nodes <==
	Name:               addons-642189
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-642189
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=addons-642189
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T18_57_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-642189
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-642189"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 18:56:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-642189
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:01:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:01:37 +0000   Fri, 17 Oct 2025 18:56:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:01:37 +0000   Fri, 17 Oct 2025 18:56:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:01:37 +0000   Fri, 17 Oct 2025 18:56:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:01:37 +0000   Fri, 17 Oct 2025 18:57:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-642189
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                bdcb748b-3e8d-4cb8-92a6-69cb543c2625
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  default                     cloud-spanner-emulator-86bd5cbb97-fbjhl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  default                     hello-world-app-5d498dc89-pbsvg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-862fn                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  gcp-auth                    gcp-auth-78565c9fb4-qz4xs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-m2d8d    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m51s
	  kube-system                 amd-gpu-device-plugin-t48xm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 coredns-66bc5c9577-9qzb6                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m53s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 csi-hostpathplugin-5kdtq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 etcd-addons-642189                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m58s
	  kube-system                 kindnet-6gk89                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m53s
	  kube-system                 kube-apiserver-addons-642189                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-addons-642189        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-proxy-n4pk6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-scheduler-addons-642189                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 metrics-server-85b7d694d7-7d6xn              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m51s
	  kube-system                 nvidia-device-plugin-daemonset-5272k         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 registry-6b586f9694-gfg4q                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 registry-creds-764b6fb674-wpqx2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 registry-proxy-7wchq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 snapshot-controller-7d9fbc56b8-qxcgb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 snapshot-controller-7d9fbc56b8-x4f9r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  local-path-storage          local-path-provisioner-648f6765c9-7cp9v      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-76bx8               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m52s  kube-proxy       
	  Normal  Starting                 4m58s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m58s  kubelet          Node addons-642189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s  kubelet          Node addons-642189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m58s  kubelet          Node addons-642189 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m54s  node-controller  Node addons-642189 event: Registered Node addons-642189 in Controller
	  Normal  NodeReady                4m12s  kubelet          Node addons-642189 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d1 49 91 03 c2 08 06
	[  +0.000804] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 a9 2b 44 da ae 08 06
	[Oct17 18:59] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022229] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023876] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
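
	The repeating "martian source 10.244.0.22 from 127.0.0.1" entries mean packets sourced from loopback arrived on eth0, which the kernel logs whenever log_martians is enabled. This pattern is commonly associated with localhost-sourced NodePort traffic (cf. the route_localnet=1 note in the kube-proxy section below) rather than a fault. A sketch for checking the relevant sysctls on the node (sysctl names are standard; the values are assumptions, not captured in this report):

	  minikube -p addons-642189 ssh -- sysctl \
	    net.ipv4.conf.all.route_localnet net.ipv4.conf.all.log_martians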
	
	
	==> etcd [8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d] <==
	{"level":"warn","ts":"2025-10-17T18:56:58.093476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.100531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.106750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.113186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.119549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.126275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.133362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.143251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.149623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.156016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.208425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:09.583634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:09.605057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:35.635447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:35.642496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:35.665185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39598","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T18:57:56.085532Z","caller":"traceutil/trace.go:172","msg":"trace[120260436] transaction","detail":"{read_only:false; response_revision:982; number_of_response:1; }","duration":"115.537819ms","start":"2025-10-17T18:57:55.969972Z","end":"2025-10-17T18:57:56.085510Z","steps":["trace[120260436] 'process raft request'  (duration: 115.315749ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T18:58:08.633548Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.724638ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-certs-patch-rdzx2.186f5c52347a130d\" limit:1 ","response":"range_response_count:1 size:841"}
	{"level":"info","ts":"2025-10-17T18:58:08.633661Z","caller":"traceutil/trace.go:172","msg":"trace[677186799] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-certs-patch-rdzx2.186f5c52347a130d; range_end:; response_count:1; response_revision:1070; }","duration":"123.867582ms","start":"2025-10-17T18:58:08.509770Z","end":"2025-10-17T18:58:08.633638Z","steps":["trace[677186799] 'agreement among raft nodes before linearized reading'  (duration: 87.790869ms)","trace[677186799] 'range keys from in-memory index tree'  (duration: 35.826097ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T18:58:08.633699Z","caller":"traceutil/trace.go:172","msg":"trace[673993011] transaction","detail":"{read_only:false; response_revision:1072; number_of_response:1; }","duration":"127.047611ms","start":"2025-10-17T18:58:08.506621Z","end":"2025-10-17T18:58:08.633668Z","steps":["trace[673993011] 'process raft request'  (duration: 126.95624ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T18:58:08.633704Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.29337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T18:58:08.633755Z","caller":"traceutil/trace.go:172","msg":"trace[391353548] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1072; }","duration":"111.383687ms","start":"2025-10-17T18:58:08.522364Z","end":"2025-10-17T18:58:08.633748Z","steps":["trace[391353548] 'agreement among raft nodes before linearized reading'  (duration: 111.276414ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T18:58:08.633799Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.662525ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T18:58:08.633742Z","caller":"traceutil/trace.go:172","msg":"trace[891336695] transaction","detail":"{read_only:false; response_revision:1071; number_of_response:1; }","duration":"156.790331ms","start":"2025-10-17T18:58:08.476930Z","end":"2025-10-17T18:58:08.633720Z","steps":["trace[891336695] 'process raft request'  (duration: 120.584837ms)","trace[891336695] 'compare'  (duration: 35.926156ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T18:58:08.633826Z","caller":"traceutil/trace.go:172","msg":"trace[1379347943] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1072; }","duration":"110.693619ms","start":"2025-10-17T18:58:08.523126Z","end":"2025-10-17T18:58:08.633820Z","steps":["trace[1379347943] 'agreement among raft nodes before linearized reading'  (duration: 110.643318ms)"],"step_count":1}
	
	
	==> gcp-auth [b4ac0698e398ee3ec3bf7468238bcff34349540a931e90303960359cdb3c9e91] <==
	2025/10/17 18:58:23 GCP Auth Webhook started!
	2025/10/17 18:59:09 Ready to marshal response ...
	2025/10/17 18:59:09 Ready to write response ...
	2025/10/17 18:59:09 Ready to marshal response ...
	2025/10/17 18:59:09 Ready to write response ...
	2025/10/17 18:59:09 Ready to marshal response ...
	2025/10/17 18:59:09 Ready to write response ...
	2025/10/17 18:59:24 Ready to marshal response ...
	2025/10/17 18:59:24 Ready to write response ...
	2025/10/17 18:59:24 Ready to marshal response ...
	2025/10/17 18:59:24 Ready to write response ...
	2025/10/17 18:59:28 Ready to marshal response ...
	2025/10/17 18:59:28 Ready to write response ...
	2025/10/17 18:59:31 Ready to marshal response ...
	2025/10/17 18:59:31 Ready to write response ...
	2025/10/17 18:59:34 Ready to marshal response ...
	2025/10/17 18:59:34 Ready to write response ...
	2025/10/17 18:59:42 Ready to marshal response ...
	2025/10/17 18:59:42 Ready to write response ...
	2025/10/17 18:59:56 Ready to marshal response ...
	2025/10/17 18:59:56 Ready to write response ...
	2025/10/17 19:01:57 Ready to marshal response ...
	2025/10/17 19:01:57 Ready to write response ...
	
	
	==> kernel <==
	 19:01:59 up  2:44,  0 user,  load average: 0.31, 0.60, 0.70
	Linux addons-642189 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd] <==
	I1017 18:59:57.272027       1 main.go:301] handling current node
	I1017 19:00:07.271205       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:00:07.271243       1 main.go:301] handling current node
	I1017 19:00:17.272846       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:00:17.272891       1 main.go:301] handling current node
	I1017 19:00:27.275964       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:00:27.276014       1 main.go:301] handling current node
	I1017 19:00:37.271668       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:00:37.271719       1 main.go:301] handling current node
	I1017 19:00:47.274063       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:00:47.274106       1 main.go:301] handling current node
	I1017 19:00:57.271731       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:00:57.271771       1 main.go:301] handling current node
	I1017 19:01:07.271573       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:01:07.271614       1 main.go:301] handling current node
	I1017 19:01:17.271636       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:01:17.271669       1 main.go:301] handling current node
	I1017 19:01:27.271443       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:01:27.271493       1 main.go:301] handling current node
	I1017 19:01:37.272234       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:01:37.272296       1 main.go:301] handling current node
	I1017 19:01:47.273550       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:01:47.273585       1 main.go:301] handling current node
	I1017 19:01:57.271377       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:01:57.271414       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9] <==
	W1017 18:57:35.658116       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1017 18:57:35.665149       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1017 18:57:47.435943       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.197.54:443: connect: connection refused
	E1017 18:57:47.435998       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.197.54:443: connect: connection refused" logger="UnhandledError"
	W1017 18:57:47.435964       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.197.54:443: connect: connection refused
	E1017 18:57:47.436064       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.197.54:443: connect: connection refused" logger="UnhandledError"
	W1017 18:57:47.459589       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.197.54:443: connect: connection refused
	E1017 18:57:47.459634       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.197.54:443: connect: connection refused" logger="UnhandledError"
	W1017 18:57:47.461401       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.197.54:443: connect: connection refused
	E1017 18:57:47.461438       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.197.54:443: connect: connection refused" logger="UnhandledError"
	E1017 18:57:53.448979       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.234.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.234.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.234.99:443: connect: connection refused" logger="UnhandledError"
	W1017 18:57:53.449252       1 handler_proxy.go:99] no RequestInfo found in the context
	E1017 18:57:53.449336       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1017 18:57:53.450116       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.234.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.234.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.234.99:443: connect: connection refused" logger="UnhandledError"
	E1017 18:57:53.455433       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.234.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.234.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.234.99:443: connect: connection refused" logger="UnhandledError"
	I1017 18:57:53.510561       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1017 18:59:17.759026       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54558: use of closed network connection
	E1017 18:59:17.919788       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54570: use of closed network connection
	I1017 18:59:31.595794       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1017 18:59:31.794049       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.85.172"}
	I1017 18:59:51.082239       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1017 19:01:57.995323       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.34.43"}
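
	The "Failed calling webhook, failing open gcp-auth-mutate.k8s.io" warnings at 18:57:47 occur while the gcp-auth webhook service is still coming up (it reports started at 18:58:23 in the gcp-auth section above); "failing open" means the webhook's failure policy let admission proceed anyway. A sketch for inspecting that policy (the configuration object's name is not captured in this report, hence the list form):

	  kubectl get mutatingwebhookconfigurations \
	    -o custom-columns=NAME:.metadata.name,POLICY:.webhooks[*].failurePolicy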
	
	
	==> kube-controller-manager [44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa] <==
	I1017 18:57:05.620987       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 18:57:05.620639       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 18:57:05.620716       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 18:57:05.621016       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 18:57:05.621096       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-642189"
	I1017 18:57:05.621012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 18:57:05.620716       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 18:57:05.621159       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1017 18:57:05.621202       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 18:57:05.621318       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 18:57:05.621719       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 18:57:05.624443       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 18:57:05.624500       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 18:57:05.626984       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 18:57:05.633186       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 18:57:05.636439       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1017 18:57:08.196556       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1017 18:57:35.629222       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1017 18:57:35.629390       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1017 18:57:35.629445       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1017 18:57:35.645511       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1017 18:57:35.652568       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1017 18:57:35.730334       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 18:57:35.752820       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 18:57:50.628260       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f] <==
	I1017 18:57:06.846087       1 server_linux.go:53] "Using iptables proxy"
	I1017 18:57:06.942356       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 18:57:07.047726       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 18:57:07.047784       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 18:57:07.047889       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 18:57:07.178574       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 18:57:07.178782       1 server_linux.go:132] "Using iptables Proxier"
	I1017 18:57:07.194632       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 18:57:07.196450       1 server.go:527] "Version info" version="v1.34.1"
	I1017 18:57:07.196490       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 18:57:07.202121       1 config.go:200] "Starting service config controller"
	I1017 18:57:07.205933       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 18:57:07.202570       1 config.go:309] "Starting node config controller"
	I1017 18:57:07.205970       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 18:57:07.205976       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 18:57:07.202910       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 18:57:07.205984       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 18:57:07.202898       1 config.go:106] "Starting endpoint slice config controller"
	I1017 18:57:07.205995       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 18:57:07.307295       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 18:57:07.307367       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 18:57:07.319175       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
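
	The single error in the kube-proxy log is a configuration warning, not a fault: with nodePortAddresses unset, NodePorts bind on every local IP, and the message itself suggests `--nodeport-addresses primary`. A sketch for checking how this cluster is configured, assuming the usual kubeadm-managed ConfigMap:

	  kubectl -n kube-system get configmap kube-proxy \
	    -o jsonpath='{.data.config\.conf}' | grep -i nodePortAddresses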
	
	
	==> kube-scheduler [43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a] <==
	E1017 18:56:58.633747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 18:56:58.633666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 18:56:58.633479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 18:56:58.633635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 18:56:58.633892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 18:56:58.634008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 18:56:58.634053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 18:56:58.634062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 18:56:58.634134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 18:56:58.634132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 18:56:58.634249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 18:56:58.634288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 18:56:58.634334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 18:56:58.634341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 18:56:58.634341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 18:56:58.634947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 18:56:59.462129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 18:56:59.480591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 18:56:59.535584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 18:56:59.554025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 18:56:59.604356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 18:56:59.693315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 18:56:59.698296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 18:56:59.787731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1017 18:57:01.831750       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:00:01 addons-642189 kubelet[1283]: I1017 19:00:01.548191    1283 scope.go:117] "RemoveContainer" containerID="555bca55548d53ce96f0d9836852f8c670fe7d5d4123148188f0528b9bd96c7c"
	Oct 17 19:00:01 addons-642189 kubelet[1283]: I1017 19:00:01.556158    1283 scope.go:117] "RemoveContainer" containerID="6cc1b7f78e9b21c0664e5ac78120a453e28c65ce11717d2f784bc7ddc398656b"
	Oct 17 19:00:03 addons-642189 kubelet[1283]: I1017 19:00:03.530605    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=6.923017972 podStartE2EDuration="7.530574682s" podCreationTimestamp="2025-10-17 18:59:56 +0000 UTC" firstStartedPulling="2025-10-17 18:59:56.79133755 +0000 UTC m=+175.664293926" lastFinishedPulling="2025-10-17 18:59:57.398894257 +0000 UTC m=+176.271850636" observedRunningTime="2025-10-17 18:59:58.028780642 +0000 UTC m=+176.901737039" watchObservedRunningTime="2025-10-17 19:00:03.530574682 +0000 UTC m=+182.403531080"
	Oct 17 19:00:03 addons-642189 kubelet[1283]: I1017 19:00:03.832134    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^78a0bf5c-ab8b-11f0-b541-9ecbd56f4b3c\") pod \"c502d1e4-f363-4862-9300-d5c4d58f0908\" (UID: \"c502d1e4-f363-4862-9300-d5c4d58f0908\") "
	Oct 17 19:00:03 addons-642189 kubelet[1283]: I1017 19:00:03.832187    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c502d1e4-f363-4862-9300-d5c4d58f0908-gcp-creds\") pod \"c502d1e4-f363-4862-9300-d5c4d58f0908\" (UID: \"c502d1e4-f363-4862-9300-d5c4d58f0908\") "
	Oct 17 19:00:03 addons-642189 kubelet[1283]: I1017 19:00:03.832209    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrbvt\" (UniqueName: \"kubernetes.io/projected/c502d1e4-f363-4862-9300-d5c4d58f0908-kube-api-access-zrbvt\") pod \"c502d1e4-f363-4862-9300-d5c4d58f0908\" (UID: \"c502d1e4-f363-4862-9300-d5c4d58f0908\") "
	Oct 17 19:00:03 addons-642189 kubelet[1283]: I1017 19:00:03.832308    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c502d1e4-f363-4862-9300-d5c4d58f0908-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "c502d1e4-f363-4862-9300-d5c4d58f0908" (UID: "c502d1e4-f363-4862-9300-d5c4d58f0908"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 17 19:00:03 addons-642189 kubelet[1283]: I1017 19:00:03.834600    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c502d1e4-f363-4862-9300-d5c4d58f0908-kube-api-access-zrbvt" (OuterVolumeSpecName: "kube-api-access-zrbvt") pod "c502d1e4-f363-4862-9300-d5c4d58f0908" (UID: "c502d1e4-f363-4862-9300-d5c4d58f0908"). InnerVolumeSpecName "kube-api-access-zrbvt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 17 19:00:03 addons-642189 kubelet[1283]: I1017 19:00:03.835727    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^78a0bf5c-ab8b-11f0-b541-9ecbd56f4b3c" (OuterVolumeSpecName: "task-pv-storage") pod "c502d1e4-f363-4862-9300-d5c4d58f0908" (UID: "c502d1e4-f363-4862-9300-d5c4d58f0908"). InnerVolumeSpecName "pvc-dd44d06d-f84c-4a6a-b7e1-a30c3aba0e1f". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 17 19:00:03 addons-642189 kubelet[1283]: I1017 19:00:03.933276    1283 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-dd44d06d-f84c-4a6a-b7e1-a30c3aba0e1f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^78a0bf5c-ab8b-11f0-b541-9ecbd56f4b3c\") on node \"addons-642189\" "
	Oct 17 19:00:03 addons-642189 kubelet[1283]: I1017 19:00:03.933312    1283 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c502d1e4-f363-4862-9300-d5c4d58f0908-gcp-creds\") on node \"addons-642189\" DevicePath \"\""
	Oct 17 19:00:03 addons-642189 kubelet[1283]: I1017 19:00:03.933324    1283 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zrbvt\" (UniqueName: \"kubernetes.io/projected/c502d1e4-f363-4862-9300-d5c4d58f0908-kube-api-access-zrbvt\") on node \"addons-642189\" DevicePath \"\""
	Oct 17 19:00:03 addons-642189 kubelet[1283]: I1017 19:00:03.937885    1283 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-dd44d06d-f84c-4a6a-b7e1-a30c3aba0e1f" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^78a0bf5c-ab8b-11f0-b541-9ecbd56f4b3c") on node "addons-642189"
	Oct 17 19:00:04 addons-642189 kubelet[1283]: I1017 19:00:04.027492    1283 scope.go:117] "RemoveContainer" containerID="db7e04d363684fb560f4cc95236d1ae7d59729aaafaaee89e804a9272a70c075"
	Oct 17 19:00:04 addons-642189 kubelet[1283]: I1017 19:00:04.033717    1283 reconciler_common.go:299] "Volume detached for volume \"pvc-dd44d06d-f84c-4a6a-b7e1-a30c3aba0e1f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^78a0bf5c-ab8b-11f0-b541-9ecbd56f4b3c\") on node \"addons-642189\" DevicePath \"\""
	Oct 17 19:00:04 addons-642189 kubelet[1283]: I1017 19:00:04.037090    1283 scope.go:117] "RemoveContainer" containerID="db7e04d363684fb560f4cc95236d1ae7d59729aaafaaee89e804a9272a70c075"
	Oct 17 19:00:04 addons-642189 kubelet[1283]: E1017 19:00:04.037595    1283 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db7e04d363684fb560f4cc95236d1ae7d59729aaafaaee89e804a9272a70c075\": container with ID starting with db7e04d363684fb560f4cc95236d1ae7d59729aaafaaee89e804a9272a70c075 not found: ID does not exist" containerID="db7e04d363684fb560f4cc95236d1ae7d59729aaafaaee89e804a9272a70c075"
	Oct 17 19:00:04 addons-642189 kubelet[1283]: I1017 19:00:04.037650    1283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db7e04d363684fb560f4cc95236d1ae7d59729aaafaaee89e804a9272a70c075"} err="failed to get container status \"db7e04d363684fb560f4cc95236d1ae7d59729aaafaaee89e804a9272a70c075\": rpc error: code = NotFound desc = could not find container \"db7e04d363684fb560f4cc95236d1ae7d59729aaafaaee89e804a9272a70c075\": container with ID starting with db7e04d363684fb560f4cc95236d1ae7d59729aaafaaee89e804a9272a70c075 not found: ID does not exist"
	Oct 17 19:00:05 addons-642189 kubelet[1283]: I1017 19:00:05.217887    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c502d1e4-f363-4862-9300-d5c4d58f0908" path="/var/lib/kubelet/pods/c502d1e4-f363-4862-9300-d5c4d58f0908/volumes"
	Oct 17 19:00:06 addons-642189 kubelet[1283]: I1017 19:00:06.053543    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-wpqx2" podStartSLOduration=177.732543151 podStartE2EDuration="2m59.053517467s" podCreationTimestamp="2025-10-17 18:57:07 +0000 UTC" firstStartedPulling="2025-10-17 19:00:04.239794917 +0000 UTC m=+183.112751298" lastFinishedPulling="2025-10-17 19:00:05.560769238 +0000 UTC m=+184.433725614" observedRunningTime="2025-10-17 19:00:06.051953004 +0000 UTC m=+184.924909389" watchObservedRunningTime="2025-10-17 19:00:06.053517467 +0000 UTC m=+184.926473867"
	Oct 17 19:00:32 addons-642189 kubelet[1283]: I1017 19:00:32.214785    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-7wchq" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:00:41 addons-642189 kubelet[1283]: I1017 19:00:41.215255    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-t48xm" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:01:00 addons-642189 kubelet[1283]: I1017 19:01:00.214118    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5272k" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:01:57 addons-642189 kubelet[1283]: I1017 19:01:57.952626    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb5dj\" (UniqueName: \"kubernetes.io/projected/876857ff-b54d-4bbe-b34a-657920e8c37f-kube-api-access-lb5dj\") pod \"hello-world-app-5d498dc89-pbsvg\" (UID: \"876857ff-b54d-4bbe-b34a-657920e8c37f\") " pod="default/hello-world-app-5d498dc89-pbsvg"
	Oct 17 19:01:57 addons-642189 kubelet[1283]: I1017 19:01:57.952713    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/876857ff-b54d-4bbe-b34a-657920e8c37f-gcp-creds\") pod \"hello-world-app-5d498dc89-pbsvg\" (UID: \"876857ff-b54d-4bbe-b34a-657920e8c37f\") " pod="default/hello-world-app-5d498dc89-pbsvg"
	
	
	==> storage-provisioner [05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c] <==
	W1017 19:01:35.104411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:37.107947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:37.113213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:39.117164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:39.122833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:41.126497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:41.131761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:43.135306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:43.139348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:45.143029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:45.147292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:47.150766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:47.156250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:49.159947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:49.164274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:51.167715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:51.172228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:53.175910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:53.180192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:55.183707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:55.188069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:57.191839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:57.196221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:59.199968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:59.204673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
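The log tail above contains three recurring patterns that are noise rather than failures: the kube-scheduler "Failed to watch ... is forbidden" reflector errors are standard startup churn that ends once RBAC propagates (note the closing "Caches are synced" line), the kubelet's 'secret "gcp-auth" not found' warnings are non-fatal because these pods pull public images without a pull secret, and the storage-provisioner warnings come from it still using the deprecated v1 Endpoints API, most likely for its leader-election lock. A quick way to double-check the first and last points from a workstation, assuming the kubeconfig context from this run (a sketch, not part of the test):

    # The scheduler's RBAC did converge; this should print "yes":
    kubectl --context addons-642189 auth can-i list pods --as=system:kube-scheduler

    # EndpointSlices (the non-deprecated replacement) exist alongside the
    # Endpoints objects the provisioner still polls:
    kubectl --context addons-642189 get endpointslices -n kube-system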
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-642189 -n addons-642189
helpers_test.go:269: (dbg) Run:  kubectl --context addons-642189 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-pbsvg ingress-nginx-admission-create-xlhk6 ingress-nginx-admission-patch-bm6p2
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-642189 describe pod hello-world-app-5d498dc89-pbsvg ingress-nginx-admission-create-xlhk6 ingress-nginx-admission-patch-bm6p2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-642189 describe pod hello-world-app-5d498dc89-pbsvg ingress-nginx-admission-create-xlhk6 ingress-nginx-admission-patch-bm6p2: exit status 1 (70.693381ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-pbsvg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-642189/192.168.49.2
	Start Time:       Fri, 17 Oct 2025 19:01:57 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lb5dj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lb5dj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-pbsvg to addons-642189
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.358s (1.358s including waiting). Image size: 4944818 bytes.
	  Normal  Created    1s    kubelet            Created container: hello-world-app
	  Normal  Started    1s    kubelet            Started container hello-world-app

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xlhk6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bm6p2" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-642189 describe pod hello-world-app-5d498dc89-pbsvg ingress-nginx-admission-create-xlhk6 ingress-nginx-admission-patch-bm6p2: exit status 1
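The exit status 1 here is expected kubectl behavior rather than an additional failure: "kubectl describe" still prints the resources it can find on stdout, but exits non-zero when any named resource is missing, and the two ingress-nginx admission pods had already been cleaned up. A minimal reproduction (hypothetical pod name, illustrative only):

    kubectl --context addons-642189 describe pod no-such-pod; echo "exit=$?"
    # Error from server (NotFound): pods "no-such-pod" not found
    # exit=1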
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-642189 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (248.025971ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:02:00.560825  511961 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:02:00.561133  511961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:02:00.561143  511961 out.go:374] Setting ErrFile to fd 2...
	I1017 19:02:00.561148  511961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:02:00.561415  511961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:02:00.561820  511961 mustload.go:65] Loading cluster: addons-642189
	I1017 19:02:00.562225  511961 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:02:00.562245  511961 addons.go:606] checking whether the cluster is paused
	I1017 19:02:00.562352  511961 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:02:00.562373  511961 host.go:66] Checking if "addons-642189" exists ...
	I1017 19:02:00.562901  511961 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 19:02:00.581545  511961 ssh_runner.go:195] Run: systemctl --version
	I1017 19:02:00.581606  511961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 19:02:00.599723  511961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 19:02:00.697218  511961 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:02:00.697299  511961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:02:00.728631  511961 cri.go:89] found id: "d530a9f1a8aa91a88d8e279ebc8dd0f9aca84b78e9f83cbcf95a9dbe15a23283"
	I1017 19:02:00.728666  511961 cri.go:89] found id: "621b748d538846f79bca883df087ce87a58a6e5cc5dbbb8f2ae4845785e122d6"
	I1017 19:02:00.728670  511961 cri.go:89] found id: "317712e1d5627e3d52413fcacd6f0a3e40e74b682567b7117acb2ddbf4da2a72"
	I1017 19:02:00.728673  511961 cri.go:89] found id: "c8951bd4e7631f9e6fa9ad944251500dd44cda63d891f7b553931aa3ef22e7e7"
	I1017 19:02:00.728675  511961 cri.go:89] found id: "6073132bac88bd54a1f9014aa1b74b68de0ac557ac483e5c0a7ff51ac939a2dd"
	I1017 19:02:00.728692  511961 cri.go:89] found id: "d3882b8636526fe7f302d3351b33fc68a8df5109463693af6642869704a2b6a2"
	I1017 19:02:00.728697  511961 cri.go:89] found id: "99fe19979e6f782cb4ff1df09e72c9c58535540daf8c28b63b4a3f1719cfa365"
	I1017 19:02:00.728700  511961 cri.go:89] found id: "600ce5e0b6a8556aa7c055afc13692cb00b2d6f0a82ba6d0817e5e424b49881c"
	I1017 19:02:00.728705  511961 cri.go:89] found id: "214596c066d6ebce81c069cda2c2790ee022d3770221e7e183390decc49e626b"
	I1017 19:02:00.728713  511961 cri.go:89] found id: "cd49fb8b1ee5ca17b886edf352059ec13c3aa8fed46c8383c6660656f0403d67"
	I1017 19:02:00.728718  511961 cri.go:89] found id: "b3f4b36a5cb43ddf3de225e5f08ad4b2d165ae6234c165082ae1316a48f48425"
	I1017 19:02:00.728721  511961 cri.go:89] found id: "fc47a341f594c2a4203992ac73a7c89fb4722c54e399f0949c3604dfa81f70ef"
	I1017 19:02:00.728725  511961 cri.go:89] found id: "d3140eef7e893c98db6a57843c26dd0767733610bfaab45265577ad3a64a334e"
	I1017 19:02:00.728729  511961 cri.go:89] found id: "bce8d27694469a014387080e94f416e3cfb88071ea69506ea8a1d04b16176e43"
	I1017 19:02:00.728733  511961 cri.go:89] found id: "26a77f9d8fd20d481d8ec7b0d85a65954b10af33ae4994293044ad2067b41872"
	I1017 19:02:00.728740  511961 cri.go:89] found id: "afa9f6b0496818adf412ab2c3cf979e86f2593796860b7b9e53c8fd85f0fe586"
	I1017 19:02:00.728747  511961 cri.go:89] found id: "ea8c7aa6a69f9b8476c9c28a6ac0944597fdf78921727f3c142c41a2b6a9bb00"
	I1017 19:02:00.728754  511961 cri.go:89] found id: "05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c"
	I1017 19:02:00.728758  511961 cri.go:89] found id: "c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854"
	I1017 19:02:00.728760  511961 cri.go:89] found id: "d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd"
	I1017 19:02:00.728763  511961 cri.go:89] found id: "49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f"
	I1017 19:02:00.728765  511961 cri.go:89] found id: "43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a"
	I1017 19:02:00.728767  511961 cri.go:89] found id: "8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d"
	I1017 19:02:00.728769  511961 cri.go:89] found id: "44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa"
	I1017 19:02:00.728771  511961 cri.go:89] found id: "a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9"
	I1017 19:02:00.728774  511961 cri.go:89] found id: ""
	I1017 19:02:00.728829  511961 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:02:00.744361  511961 out.go:203] 
	W1017 19:02:00.745725  511961 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:02:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:02:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:02:00.745756  511961 out.go:285] * 
	* 
	W1017 19:02:00.749937  511961 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:02:00.751336  511961 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-642189 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
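This failure, and every other MK_ADDON_DISABLE_PAUSED failure in this report, traces back to the same check: before disabling an addon, minikube verifies the cluster is not paused by running "sudo runc list -f json" on the node, and on this CRI-O image the default runc state directory /run/runc does not exist, so the check itself exits 1 and the disable is aborted even though nothing is paused. The commands below reproduce the broken check and show the containers are in fact running; a sketch against this run's profile, not part of the test:

    # The failing pause check, run directly on the node:
    minikube ssh -p addons-642189 -- sudo runc list -f json
    # level=error msg="open /run/runc: no such file or directory"

    # The CRI side sees the same kube-system containers as running, not paused:
    minikube ssh -p addons-642189 -- sudo crictl ps --state running --quiet | head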
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-642189 addons disable ingress --alsologtostderr -v=1: exit status 11 (240.058391ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:02:00.802870  512029 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:02:00.803236  512029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:02:00.803247  512029 out.go:374] Setting ErrFile to fd 2...
	I1017 19:02:00.803252  512029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:02:00.803440  512029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:02:00.803754  512029 mustload.go:65] Loading cluster: addons-642189
	I1017 19:02:00.804134  512029 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:02:00.804151  512029 addons.go:606] checking whether the cluster is paused
	I1017 19:02:00.804227  512029 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:02:00.804239  512029 host.go:66] Checking if "addons-642189" exists ...
	I1017 19:02:00.804622  512029 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 19:02:00.823269  512029 ssh_runner.go:195] Run: systemctl --version
	I1017 19:02:00.823339  512029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 19:02:00.842234  512029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 19:02:00.938701  512029 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:02:00.938804  512029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:02:00.969436  512029 cri.go:89] found id: "d530a9f1a8aa91a88d8e279ebc8dd0f9aca84b78e9f83cbcf95a9dbe15a23283"
	I1017 19:02:00.969468  512029 cri.go:89] found id: "621b748d538846f79bca883df087ce87a58a6e5cc5dbbb8f2ae4845785e122d6"
	I1017 19:02:00.969474  512029 cri.go:89] found id: "317712e1d5627e3d52413fcacd6f0a3e40e74b682567b7117acb2ddbf4da2a72"
	I1017 19:02:00.969479  512029 cri.go:89] found id: "c8951bd4e7631f9e6fa9ad944251500dd44cda63d891f7b553931aa3ef22e7e7"
	I1017 19:02:00.969483  512029 cri.go:89] found id: "6073132bac88bd54a1f9014aa1b74b68de0ac557ac483e5c0a7ff51ac939a2dd"
	I1017 19:02:00.969489  512029 cri.go:89] found id: "d3882b8636526fe7f302d3351b33fc68a8df5109463693af6642869704a2b6a2"
	I1017 19:02:00.969493  512029 cri.go:89] found id: "99fe19979e6f782cb4ff1df09e72c9c58535540daf8c28b63b4a3f1719cfa365"
	I1017 19:02:00.969496  512029 cri.go:89] found id: "600ce5e0b6a8556aa7c055afc13692cb00b2d6f0a82ba6d0817e5e424b49881c"
	I1017 19:02:00.969500  512029 cri.go:89] found id: "214596c066d6ebce81c069cda2c2790ee022d3770221e7e183390decc49e626b"
	I1017 19:02:00.969514  512029 cri.go:89] found id: "cd49fb8b1ee5ca17b886edf352059ec13c3aa8fed46c8383c6660656f0403d67"
	I1017 19:02:00.969518  512029 cri.go:89] found id: "b3f4b36a5cb43ddf3de225e5f08ad4b2d165ae6234c165082ae1316a48f48425"
	I1017 19:02:00.969522  512029 cri.go:89] found id: "fc47a341f594c2a4203992ac73a7c89fb4722c54e399f0949c3604dfa81f70ef"
	I1017 19:02:00.969525  512029 cri.go:89] found id: "d3140eef7e893c98db6a57843c26dd0767733610bfaab45265577ad3a64a334e"
	I1017 19:02:00.969530  512029 cri.go:89] found id: "bce8d27694469a014387080e94f416e3cfb88071ea69506ea8a1d04b16176e43"
	I1017 19:02:00.969534  512029 cri.go:89] found id: "26a77f9d8fd20d481d8ec7b0d85a65954b10af33ae4994293044ad2067b41872"
	I1017 19:02:00.969551  512029 cri.go:89] found id: "afa9f6b0496818adf412ab2c3cf979e86f2593796860b7b9e53c8fd85f0fe586"
	I1017 19:02:00.969562  512029 cri.go:89] found id: "ea8c7aa6a69f9b8476c9c28a6ac0944597fdf78921727f3c142c41a2b6a9bb00"
	I1017 19:02:00.969569  512029 cri.go:89] found id: "05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c"
	I1017 19:02:00.969573  512029 cri.go:89] found id: "c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854"
	I1017 19:02:00.969586  512029 cri.go:89] found id: "d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd"
	I1017 19:02:00.969592  512029 cri.go:89] found id: "49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f"
	I1017 19:02:00.969599  512029 cri.go:89] found id: "43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a"
	I1017 19:02:00.969603  512029 cri.go:89] found id: "8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d"
	I1017 19:02:00.969608  512029 cri.go:89] found id: "44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa"
	I1017 19:02:00.969613  512029 cri.go:89] found id: "a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9"
	I1017 19:02:00.969619  512029 cri.go:89] found id: ""
	I1017 19:02:00.969710  512029 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:02:00.984751  512029 out.go:203] 
	W1017 19:02:00.986014  512029 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:02:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:02:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:02:00.986040  512029 out.go:285] * 
	* 
	W1017 19:02:00.990128  512029 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:02:00.991560  512029 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-642189 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (149.66s)

                                                
                                    
TestAddons/parallel/InspektorGadget (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-862fn" [f2e1217b-8eb7-4a86-a5d9-112d704636d2] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004574485s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-642189 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (244.064832ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 18:59:36.435539  508689 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:59:36.435844  508689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:36.435855  508689 out.go:374] Setting ErrFile to fd 2...
	I1017 18:59:36.435859  508689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:36.436084  508689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 18:59:36.436382  508689 mustload.go:65] Loading cluster: addons-642189
	I1017 18:59:36.436780  508689 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:36.436803  508689 addons.go:606] checking whether the cluster is paused
	I1017 18:59:36.436930  508689 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:36.436946  508689 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:59:36.437373  508689 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:59:36.455637  508689 ssh_runner.go:195] Run: systemctl --version
	I1017 18:59:36.455723  508689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:59:36.473790  508689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:59:36.571154  508689 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:59:36.571230  508689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:59:36.602813  508689 cri.go:89] found id: "621b748d538846f79bca883df087ce87a58a6e5cc5dbbb8f2ae4845785e122d6"
	I1017 18:59:36.602833  508689 cri.go:89] found id: "317712e1d5627e3d52413fcacd6f0a3e40e74b682567b7117acb2ddbf4da2a72"
	I1017 18:59:36.602836  508689 cri.go:89] found id: "c8951bd4e7631f9e6fa9ad944251500dd44cda63d891f7b553931aa3ef22e7e7"
	I1017 18:59:36.602840  508689 cri.go:89] found id: "6073132bac88bd54a1f9014aa1b74b68de0ac557ac483e5c0a7ff51ac939a2dd"
	I1017 18:59:36.602842  508689 cri.go:89] found id: "d3882b8636526fe7f302d3351b33fc68a8df5109463693af6642869704a2b6a2"
	I1017 18:59:36.602846  508689 cri.go:89] found id: "99fe19979e6f782cb4ff1df09e72c9c58535540daf8c28b63b4a3f1719cfa365"
	I1017 18:59:36.602849  508689 cri.go:89] found id: "600ce5e0b6a8556aa7c055afc13692cb00b2d6f0a82ba6d0817e5e424b49881c"
	I1017 18:59:36.602851  508689 cri.go:89] found id: "214596c066d6ebce81c069cda2c2790ee022d3770221e7e183390decc49e626b"
	I1017 18:59:36.602854  508689 cri.go:89] found id: "cd49fb8b1ee5ca17b886edf352059ec13c3aa8fed46c8383c6660656f0403d67"
	I1017 18:59:36.602864  508689 cri.go:89] found id: "b3f4b36a5cb43ddf3de225e5f08ad4b2d165ae6234c165082ae1316a48f48425"
	I1017 18:59:36.602867  508689 cri.go:89] found id: "fc47a341f594c2a4203992ac73a7c89fb4722c54e399f0949c3604dfa81f70ef"
	I1017 18:59:36.602869  508689 cri.go:89] found id: "d3140eef7e893c98db6a57843c26dd0767733610bfaab45265577ad3a64a334e"
	I1017 18:59:36.602872  508689 cri.go:89] found id: "bce8d27694469a014387080e94f416e3cfb88071ea69506ea8a1d04b16176e43"
	I1017 18:59:36.602874  508689 cri.go:89] found id: "26a77f9d8fd20d481d8ec7b0d85a65954b10af33ae4994293044ad2067b41872"
	I1017 18:59:36.602877  508689 cri.go:89] found id: "afa9f6b0496818adf412ab2c3cf979e86f2593796860b7b9e53c8fd85f0fe586"
	I1017 18:59:36.602882  508689 cri.go:89] found id: "ea8c7aa6a69f9b8476c9c28a6ac0944597fdf78921727f3c142c41a2b6a9bb00"
	I1017 18:59:36.602886  508689 cri.go:89] found id: "05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c"
	I1017 18:59:36.602892  508689 cri.go:89] found id: "c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854"
	I1017 18:59:36.602897  508689 cri.go:89] found id: "d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd"
	I1017 18:59:36.602900  508689 cri.go:89] found id: "49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f"
	I1017 18:59:36.602904  508689 cri.go:89] found id: "43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a"
	I1017 18:59:36.602908  508689 cri.go:89] found id: "8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d"
	I1017 18:59:36.602912  508689 cri.go:89] found id: "44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa"
	I1017 18:59:36.602917  508689 cri.go:89] found id: "a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9"
	I1017 18:59:36.602924  508689 cri.go:89] found id: ""
	I1017 18:59:36.602965  508689 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 18:59:36.617948  508689 out.go:203] 
	W1017 18:59:36.619447  508689 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 18:59:36.619479  508689 out.go:285] * 
	* 
	W1017 18:59:36.624360  508689 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 18:59:36.626172  508689 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-642189 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)
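For reference, the long run of "found id:" lines above is the first half of that pause check: a crictl listing of kube-system containers filtered by pod-namespace label, with --quiet reducing the output to bare IDs. The same query without --quiet is easier to read when verifying by hand that everything is Running (sketch, same profile as above):

    minikube ssh -p addons-642189 -- sudo crictl ps -a \
        --label io.kubernetes.pod.namespace=kube-system -o table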

                                                
                                    
TestAddons/parallel/MetricsServer (5.34s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.324313ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-7d6xn" [3877854d-d5e2-4181-ba78-988a54712111] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004584567s
addons_test.go:463: (dbg) Run:  kubectl --context addons-642189 top pods -n kube-system
2025/10/17 18:59:31 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-642189 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (255.698817ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 18:59:31.179792  507336 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:59:31.179885  507336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:31.179891  507336 out.go:374] Setting ErrFile to fd 2...
	I1017 18:59:31.179895  507336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:31.180145  507336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 18:59:31.180434  507336 mustload.go:65] Loading cluster: addons-642189
	I1017 18:59:31.180784  507336 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:31.180800  507336 addons.go:606] checking whether the cluster is paused
	I1017 18:59:31.180880  507336 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:31.180895  507336 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:59:31.181346  507336 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:59:31.200735  507336 ssh_runner.go:195] Run: systemctl --version
	I1017 18:59:31.200805  507336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:59:31.219337  507336 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:59:31.319229  507336 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:59:31.319324  507336 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:59:31.351935  507336 cri.go:89] found id: "621b748d538846f79bca883df087ce87a58a6e5cc5dbbb8f2ae4845785e122d6"
	I1017 18:59:31.351957  507336 cri.go:89] found id: "317712e1d5627e3d52413fcacd6f0a3e40e74b682567b7117acb2ddbf4da2a72"
	I1017 18:59:31.351961  507336 cri.go:89] found id: "c8951bd4e7631f9e6fa9ad944251500dd44cda63d891f7b553931aa3ef22e7e7"
	I1017 18:59:31.351965  507336 cri.go:89] found id: "6073132bac88bd54a1f9014aa1b74b68de0ac557ac483e5c0a7ff51ac939a2dd"
	I1017 18:59:31.351967  507336 cri.go:89] found id: "d3882b8636526fe7f302d3351b33fc68a8df5109463693af6642869704a2b6a2"
	I1017 18:59:31.351972  507336 cri.go:89] found id: "99fe19979e6f782cb4ff1df09e72c9c58535540daf8c28b63b4a3f1719cfa365"
	I1017 18:59:31.351975  507336 cri.go:89] found id: "600ce5e0b6a8556aa7c055afc13692cb00b2d6f0a82ba6d0817e5e424b49881c"
	I1017 18:59:31.351977  507336 cri.go:89] found id: "214596c066d6ebce81c069cda2c2790ee022d3770221e7e183390decc49e626b"
	I1017 18:59:31.351980  507336 cri.go:89] found id: "cd49fb8b1ee5ca17b886edf352059ec13c3aa8fed46c8383c6660656f0403d67"
	I1017 18:59:31.351991  507336 cri.go:89] found id: "b3f4b36a5cb43ddf3de225e5f08ad4b2d165ae6234c165082ae1316a48f48425"
	I1017 18:59:31.351994  507336 cri.go:89] found id: "fc47a341f594c2a4203992ac73a7c89fb4722c54e399f0949c3604dfa81f70ef"
	I1017 18:59:31.351996  507336 cri.go:89] found id: "d3140eef7e893c98db6a57843c26dd0767733610bfaab45265577ad3a64a334e"
	I1017 18:59:31.351999  507336 cri.go:89] found id: "bce8d27694469a014387080e94f416e3cfb88071ea69506ea8a1d04b16176e43"
	I1017 18:59:31.352001  507336 cri.go:89] found id: "26a77f9d8fd20d481d8ec7b0d85a65954b10af33ae4994293044ad2067b41872"
	I1017 18:59:31.352004  507336 cri.go:89] found id: "afa9f6b0496818adf412ab2c3cf979e86f2593796860b7b9e53c8fd85f0fe586"
	I1017 18:59:31.352008  507336 cri.go:89] found id: "ea8c7aa6a69f9b8476c9c28a6ac0944597fdf78921727f3c142c41a2b6a9bb00"
	I1017 18:59:31.352011  507336 cri.go:89] found id: "05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c"
	I1017 18:59:31.352016  507336 cri.go:89] found id: "c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854"
	I1017 18:59:31.352018  507336 cri.go:89] found id: "d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd"
	I1017 18:59:31.352027  507336 cri.go:89] found id: "49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f"
	I1017 18:59:31.352032  507336 cri.go:89] found id: "43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a"
	I1017 18:59:31.352037  507336 cri.go:89] found id: "8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d"
	I1017 18:59:31.352039  507336 cri.go:89] found id: "44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa"
	I1017 18:59:31.352042  507336 cri.go:89] found id: "a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9"
	I1017 18:59:31.352044  507336 cri.go:89] found id: ""
	I1017 18:59:31.352082  507336 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 18:59:31.367485  507336 out.go:203] 
	W1017 18:59:31.368993  507336 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 18:59:31.369013  507336 out.go:285] * 
	* 
	W1017 18:59:31.373141  507336 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 18:59:31.374724  507336 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-642189 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.34s)

                                                
                                    
TestAddons/parallel/CSI (35.19s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1017 18:59:29.683296  495725 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1017 18:59:29.687712  495725 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1017 18:59:29.687745  495725 kapi.go:107] duration metric: took 4.467201ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.482132ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-642189 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-642189 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [8084ba12-ff0c-4c53-a039-934b36c72750] Pending
helpers_test.go:352: "task-pv-pod" [8084ba12-ff0c-4c53-a039-934b36c72750] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [8084ba12-ff0c-4c53-a039-934b36c72750] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004087342s
addons_test.go:572: (dbg) Run:  kubectl --context addons-642189 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-642189 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-642189 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-642189 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-642189 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-642189 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-642189 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [c502d1e4-f363-4862-9300-d5c4d58f0908] Pending
helpers_test.go:352: "task-pv-pod-restore" [c502d1e4-f363-4862-9300-d5c4d58f0908] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [c502d1e4-f363-4862-9300-d5c4d58f0908] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004591232s
addons_test.go:614: (dbg) Run:  kubectl --context addons-642189 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-642189 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-642189 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-642189 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (239.690964ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:00:04.431677  509669 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:00:04.432028  509669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:04.432040  509669 out.go:374] Setting ErrFile to fd 2...
	I1017 19:00:04.432045  509669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:04.432240  509669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:00:04.432661  509669 mustload.go:65] Loading cluster: addons-642189
	I1017 19:00:04.433153  509669 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:04.433180  509669 addons.go:606] checking whether the cluster is paused
	I1017 19:00:04.433319  509669 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:04.433346  509669 host.go:66] Checking if "addons-642189" exists ...
	I1017 19:00:04.433805  509669 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 19:00:04.452216  509669 ssh_runner.go:195] Run: systemctl --version
	I1017 19:00:04.452282  509669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 19:00:04.470130  509669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 19:00:04.565876  509669 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:00:04.565956  509669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:00:04.596728  509669 cri.go:89] found id: "621b748d538846f79bca883df087ce87a58a6e5cc5dbbb8f2ae4845785e122d6"
	I1017 19:00:04.596751  509669 cri.go:89] found id: "317712e1d5627e3d52413fcacd6f0a3e40e74b682567b7117acb2ddbf4da2a72"
	I1017 19:00:04.596755  509669 cri.go:89] found id: "c8951bd4e7631f9e6fa9ad944251500dd44cda63d891f7b553931aa3ef22e7e7"
	I1017 19:00:04.596758  509669 cri.go:89] found id: "6073132bac88bd54a1f9014aa1b74b68de0ac557ac483e5c0a7ff51ac939a2dd"
	I1017 19:00:04.596761  509669 cri.go:89] found id: "d3882b8636526fe7f302d3351b33fc68a8df5109463693af6642869704a2b6a2"
	I1017 19:00:04.596764  509669 cri.go:89] found id: "99fe19979e6f782cb4ff1df09e72c9c58535540daf8c28b63b4a3f1719cfa365"
	I1017 19:00:04.596766  509669 cri.go:89] found id: "600ce5e0b6a8556aa7c055afc13692cb00b2d6f0a82ba6d0817e5e424b49881c"
	I1017 19:00:04.596768  509669 cri.go:89] found id: "214596c066d6ebce81c069cda2c2790ee022d3770221e7e183390decc49e626b"
	I1017 19:00:04.596771  509669 cri.go:89] found id: "cd49fb8b1ee5ca17b886edf352059ec13c3aa8fed46c8383c6660656f0403d67"
	I1017 19:00:04.596776  509669 cri.go:89] found id: "b3f4b36a5cb43ddf3de225e5f08ad4b2d165ae6234c165082ae1316a48f48425"
	I1017 19:00:04.596792  509669 cri.go:89] found id: "fc47a341f594c2a4203992ac73a7c89fb4722c54e399f0949c3604dfa81f70ef"
	I1017 19:00:04.596798  509669 cri.go:89] found id: "d3140eef7e893c98db6a57843c26dd0767733610bfaab45265577ad3a64a334e"
	I1017 19:00:04.596800  509669 cri.go:89] found id: "bce8d27694469a014387080e94f416e3cfb88071ea69506ea8a1d04b16176e43"
	I1017 19:00:04.596803  509669 cri.go:89] found id: "26a77f9d8fd20d481d8ec7b0d85a65954b10af33ae4994293044ad2067b41872"
	I1017 19:00:04.596806  509669 cri.go:89] found id: "afa9f6b0496818adf412ab2c3cf979e86f2593796860b7b9e53c8fd85f0fe586"
	I1017 19:00:04.596816  509669 cri.go:89] found id: "ea8c7aa6a69f9b8476c9c28a6ac0944597fdf78921727f3c142c41a2b6a9bb00"
	I1017 19:00:04.596826  509669 cri.go:89] found id: "05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c"
	I1017 19:00:04.596830  509669 cri.go:89] found id: "c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854"
	I1017 19:00:04.596833  509669 cri.go:89] found id: "d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd"
	I1017 19:00:04.596835  509669 cri.go:89] found id: "49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f"
	I1017 19:00:04.596837  509669 cri.go:89] found id: "43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a"
	I1017 19:00:04.596840  509669 cri.go:89] found id: "8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d"
	I1017 19:00:04.596842  509669 cri.go:89] found id: "44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa"
	I1017 19:00:04.596845  509669 cri.go:89] found id: "a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9"
	I1017 19:00:04.596850  509669 cri.go:89] found id: ""
	I1017 19:00:04.596889  509669 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:00:04.611571  509669 out.go:203] 
	W1017 19:00:04.612856  509669 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:00:04.612875  509669 out.go:285] * 
	* 
	W1017 19:00:04.616972  509669 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:00:04.618845  509669 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-642189 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-642189 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (249.390842ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:00:04.673991  509728 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:00:04.674105  509728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:04.674113  509728 out.go:374] Setting ErrFile to fd 2...
	I1017 19:00:04.674117  509728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:04.674367  509728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:00:04.674642  509728 mustload.go:65] Loading cluster: addons-642189
	I1017 19:00:04.675028  509728 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:04.675053  509728 addons.go:606] checking whether the cluster is paused
	I1017 19:00:04.675140  509728 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:04.675153  509728 host.go:66] Checking if "addons-642189" exists ...
	I1017 19:00:04.675564  509728 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 19:00:04.694252  509728 ssh_runner.go:195] Run: systemctl --version
	I1017 19:00:04.694316  509728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 19:00:04.713129  509728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 19:00:04.809149  509728 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:00:04.809252  509728 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:00:04.842536  509728 cri.go:89] found id: "621b748d538846f79bca883df087ce87a58a6e5cc5dbbb8f2ae4845785e122d6"
	I1017 19:00:04.842564  509728 cri.go:89] found id: "317712e1d5627e3d52413fcacd6f0a3e40e74b682567b7117acb2ddbf4da2a72"
	I1017 19:00:04.842570  509728 cri.go:89] found id: "c8951bd4e7631f9e6fa9ad944251500dd44cda63d891f7b553931aa3ef22e7e7"
	I1017 19:00:04.842575  509728 cri.go:89] found id: "6073132bac88bd54a1f9014aa1b74b68de0ac557ac483e5c0a7ff51ac939a2dd"
	I1017 19:00:04.842580  509728 cri.go:89] found id: "d3882b8636526fe7f302d3351b33fc68a8df5109463693af6642869704a2b6a2"
	I1017 19:00:04.842586  509728 cri.go:89] found id: "99fe19979e6f782cb4ff1df09e72c9c58535540daf8c28b63b4a3f1719cfa365"
	I1017 19:00:04.842591  509728 cri.go:89] found id: "600ce5e0b6a8556aa7c055afc13692cb00b2d6f0a82ba6d0817e5e424b49881c"
	I1017 19:00:04.842595  509728 cri.go:89] found id: "214596c066d6ebce81c069cda2c2790ee022d3770221e7e183390decc49e626b"
	I1017 19:00:04.842599  509728 cri.go:89] found id: "cd49fb8b1ee5ca17b886edf352059ec13c3aa8fed46c8383c6660656f0403d67"
	I1017 19:00:04.842620  509728 cri.go:89] found id: "b3f4b36a5cb43ddf3de225e5f08ad4b2d165ae6234c165082ae1316a48f48425"
	I1017 19:00:04.842629  509728 cri.go:89] found id: "fc47a341f594c2a4203992ac73a7c89fb4722c54e399f0949c3604dfa81f70ef"
	I1017 19:00:04.842633  509728 cri.go:89] found id: "d3140eef7e893c98db6a57843c26dd0767733610bfaab45265577ad3a64a334e"
	I1017 19:00:04.842637  509728 cri.go:89] found id: "bce8d27694469a014387080e94f416e3cfb88071ea69506ea8a1d04b16176e43"
	I1017 19:00:04.842640  509728 cri.go:89] found id: "26a77f9d8fd20d481d8ec7b0d85a65954b10af33ae4994293044ad2067b41872"
	I1017 19:00:04.842644  509728 cri.go:89] found id: "afa9f6b0496818adf412ab2c3cf979e86f2593796860b7b9e53c8fd85f0fe586"
	I1017 19:00:04.842651  509728 cri.go:89] found id: "ea8c7aa6a69f9b8476c9c28a6ac0944597fdf78921727f3c142c41a2b6a9bb00"
	I1017 19:00:04.842659  509728 cri.go:89] found id: "05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c"
	I1017 19:00:04.842666  509728 cri.go:89] found id: "c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854"
	I1017 19:00:04.842670  509728 cri.go:89] found id: "d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd"
	I1017 19:00:04.842674  509728 cri.go:89] found id: "49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f"
	I1017 19:00:04.842701  509728 cri.go:89] found id: "43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a"
	I1017 19:00:04.842710  509728 cri.go:89] found id: "8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d"
	I1017 19:00:04.842714  509728 cri.go:89] found id: "44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa"
	I1017 19:00:04.842718  509728 cri.go:89] found id: "a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9"
	I1017 19:00:04.842722  509728 cri.go:89] found id: ""
	I1017 19:00:04.842793  509728 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:00:04.861140  509728 out.go:203] 
	W1017 19:00:04.862407  509728 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:00:04.862424  509728 out.go:285] * 
	* 
	W1017 19:00:04.866465  509728 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:00:04.867971  509728 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-642189 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (35.19s)
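Note on the failure mode: the CSI workflow itself passed end to end (the PVC bound, task-pv-pod ran, the snapshot became ready, and the restore pod came up healthy); only the teardown's `addons disable` calls failed. Both exits share one root cause that recurs across every addon enable/disable in this run: before mutating an addon, minikube checks whether the cluster is paused (addons.go:606). It lists kube-system containers through crictl (the cri.go lines above, which succeed) and then cross-checks runc state with `sudo runc list -f json`; on this crio node /run/runc does not exist, so the runc query exits 1 and the command aborts with MK_ADDON_DISABLE_PAUSED before touching the addon. The Go sketch below is a minimal, hypothetical approximation of that flow: sshRun and the `minikube ssh` invocation are illustrative stand-ins for minikube's internal ssh_runner, not its real API.

	// Hedged approximation of the pause check that aborts each addon command
	// in this run. Run against a live profile; not minikube's actual code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshRun executes a command inside the minikube node by shelling out to
	// `minikube ssh`; the real code dials SSH directly (see sshutil.go above).
	func sshRun(profile, cmd string) (string, error) {
		out, err := exec.Command("minikube", "-p", profile, "ssh", "--", cmd).CombinedOutput()
		return string(out), err
	}

	func clusterPaused(profile string) (bool, error) {
		// Step 1: enumerate kube-system containers via crictl. This is the
		// step that succeeds in the log (the long "found id:" lists).
		ids, err := sshRun(profile, `sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`)
		if err != nil {
			return false, err
		}
		// Step 2: cross-check with runc. On this crio node /run/runc is
		// missing, so this step reproduces the log's "open /run/runc: no
		// such file or directory" and the caller exits with
		// MK_ADDON_DISABLE_PAUSED.
		if _, err := sshRun(profile, "sudo runc list -f json"); err != nil {
			return false, fmt.Errorf("check paused: list paused: runc: %w", err)
		}
		// The real check inspects the state of the containers found above;
		// this sketch only reproduces the two node commands and the failing path.
		_ = ids
		return false, nil
	}

	func main() {
		paused, err := clusterPaused("addons-642189")
		fmt.Printf("paused=%v err=%v\n", paused, err)
	}

Against the node from this report, step 2 fails exactly as in the stderr blocks above, which is why every subsequent addon test in this run exits with status 11 regardless of which addon is involved.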

                                                
                                    
TestAddons/parallel/Headlamp (2.61s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-642189 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-642189 --alsologtostderr -v=1: exit status 11 (243.935104ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 18:59:18.214845  505815 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:59:18.215083  505815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:18.215091  505815 out.go:374] Setting ErrFile to fd 2...
	I1017 18:59:18.215096  505815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:18.215338  505815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 18:59:18.215639  505815 mustload.go:65] Loading cluster: addons-642189
	I1017 18:59:18.216030  505815 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:18.216051  505815 addons.go:606] checking whether the cluster is paused
	I1017 18:59:18.216141  505815 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:18.216154  505815 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:59:18.216557  505815 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:59:18.235981  505815 ssh_runner.go:195] Run: systemctl --version
	I1017 18:59:18.236044  505815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:59:18.254522  505815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:59:18.351111  505815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:59:18.351227  505815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:59:18.383209  505815 cri.go:89] found id: "621b748d538846f79bca883df087ce87a58a6e5cc5dbbb8f2ae4845785e122d6"
	I1017 18:59:18.383228  505815 cri.go:89] found id: "317712e1d5627e3d52413fcacd6f0a3e40e74b682567b7117acb2ddbf4da2a72"
	I1017 18:59:18.383232  505815 cri.go:89] found id: "c8951bd4e7631f9e6fa9ad944251500dd44cda63d891f7b553931aa3ef22e7e7"
	I1017 18:59:18.383235  505815 cri.go:89] found id: "6073132bac88bd54a1f9014aa1b74b68de0ac557ac483e5c0a7ff51ac939a2dd"
	I1017 18:59:18.383238  505815 cri.go:89] found id: "d3882b8636526fe7f302d3351b33fc68a8df5109463693af6642869704a2b6a2"
	I1017 18:59:18.383243  505815 cri.go:89] found id: "99fe19979e6f782cb4ff1df09e72c9c58535540daf8c28b63b4a3f1719cfa365"
	I1017 18:59:18.383245  505815 cri.go:89] found id: "600ce5e0b6a8556aa7c055afc13692cb00b2d6f0a82ba6d0817e5e424b49881c"
	I1017 18:59:18.383248  505815 cri.go:89] found id: "214596c066d6ebce81c069cda2c2790ee022d3770221e7e183390decc49e626b"
	I1017 18:59:18.383250  505815 cri.go:89] found id: "cd49fb8b1ee5ca17b886edf352059ec13c3aa8fed46c8383c6660656f0403d67"
	I1017 18:59:18.383261  505815 cri.go:89] found id: "b3f4b36a5cb43ddf3de225e5f08ad4b2d165ae6234c165082ae1316a48f48425"
	I1017 18:59:18.383264  505815 cri.go:89] found id: "fc47a341f594c2a4203992ac73a7c89fb4722c54e399f0949c3604dfa81f70ef"
	I1017 18:59:18.383267  505815 cri.go:89] found id: "d3140eef7e893c98db6a57843c26dd0767733610bfaab45265577ad3a64a334e"
	I1017 18:59:18.383269  505815 cri.go:89] found id: "bce8d27694469a014387080e94f416e3cfb88071ea69506ea8a1d04b16176e43"
	I1017 18:59:18.383272  505815 cri.go:89] found id: "26a77f9d8fd20d481d8ec7b0d85a65954b10af33ae4994293044ad2067b41872"
	I1017 18:59:18.383274  505815 cri.go:89] found id: "afa9f6b0496818adf412ab2c3cf979e86f2593796860b7b9e53c8fd85f0fe586"
	I1017 18:59:18.383278  505815 cri.go:89] found id: "ea8c7aa6a69f9b8476c9c28a6ac0944597fdf78921727f3c142c41a2b6a9bb00"
	I1017 18:59:18.383281  505815 cri.go:89] found id: "05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c"
	I1017 18:59:18.383286  505815 cri.go:89] found id: "c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854"
	I1017 18:59:18.383290  505815 cri.go:89] found id: "d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd"
	I1017 18:59:18.383294  505815 cri.go:89] found id: "49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f"
	I1017 18:59:18.383297  505815 cri.go:89] found id: "43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a"
	I1017 18:59:18.383301  505815 cri.go:89] found id: "8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d"
	I1017 18:59:18.383305  505815 cri.go:89] found id: "44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa"
	I1017 18:59:18.383331  505815 cri.go:89] found id: "a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9"
	I1017 18:59:18.383338  505815 cri.go:89] found id: ""
	I1017 18:59:18.383393  505815 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 18:59:18.398139  505815 out.go:203] 
	W1017 18:59:18.399370  505815 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 18:59:18.399417  505815 out.go:285] * 
	* 
	W1017 18:59:18.403560  505815 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 18:59:18.404972  505815 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-642189 --alsologtostderr -v=1": exit status 11
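The Headlamp failure is the enable-side twin of the disable failures above: the same pause check (addons.go:606, sketched after the CSI section) runs before `addons enable` as well, so the missing /run/runc state directory aborts enables and disables alike, here surfacing as MK_ADDON_ENABLE_PAUSED rather than MK_ADDON_DISABLE_PAUSED.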
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-642189
helpers_test.go:243: (dbg) docker inspect addons-642189:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "810df9073b89c74eff4799d0df8a6ca8a8bd99720281790a1fc39583f9548eb3",
	        "Created": "2025-10-17T18:56:47.345619046Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 497687,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T18:56:47.386313203Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/810df9073b89c74eff4799d0df8a6ca8a8bd99720281790a1fc39583f9548eb3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/810df9073b89c74eff4799d0df8a6ca8a8bd99720281790a1fc39583f9548eb3/hostname",
	        "HostsPath": "/var/lib/docker/containers/810df9073b89c74eff4799d0df8a6ca8a8bd99720281790a1fc39583f9548eb3/hosts",
	        "LogPath": "/var/lib/docker/containers/810df9073b89c74eff4799d0df8a6ca8a8bd99720281790a1fc39583f9548eb3/810df9073b89c74eff4799d0df8a6ca8a8bd99720281790a1fc39583f9548eb3-json.log",
	        "Name": "/addons-642189",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-642189:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-642189",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "810df9073b89c74eff4799d0df8a6ca8a8bd99720281790a1fc39583f9548eb3",
	                "LowerDir": "/var/lib/docker/overlay2/744567c26d3445f0286a6368c84803ddd87746d653da866f782f5056f17193d9-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/744567c26d3445f0286a6368c84803ddd87746d653da866f782f5056f17193d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/744567c26d3445f0286a6368c84803ddd87746d653da866f782f5056f17193d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/744567c26d3445f0286a6368c84803ddd87746d653da866f782f5056f17193d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-642189",
	                "Source": "/var/lib/docker/volumes/addons-642189/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-642189",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-642189",
	                "name.minikube.sigs.k8s.io": "addons-642189",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "68f31a3be24d6cd663a3cb3519d845dad847ca6f875fe3ab42e4c3255fba7d5b",
	            "SandboxKey": "/var/run/docker/netns/68f31a3be24d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-642189": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:82:83:4c:53:70",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c18e6eaa32c599bc5ecf999057629d81e48002de288024396da5438376dc6ea7",
	                    "EndpointID": "6b552c996a11764d7fd56d185c5a76c5b24251a546322fbc09de96d261801c13",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-642189",
	                        "810df9073b89"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
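One cross-reference worth making explicit: the SSH endpoint every addon command dialed (127.0.0.1:33138 in the sshutil.go lines above) is the host side of the "22/tcp" binding in this inspect output. minikube resolves it with the Go template visible in the cli_runner lines; the self-contained sketch below performs the same lookup, with error handling simplified. The template string is taken verbatim from the log; everything else is illustrative.

	// Resolve the host port mapped to the node's SSH port (22/tcp), the way
	// the cli_runner lines above do. Template copied from the log output.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshHostPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("addons-642189")
		fmt.Println(port, err) // "33138" for the node in this report
	}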
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-642189 -n addons-642189
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-642189 logs -n 25: (1.193016528s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-116436 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-116436   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-116436                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-116436   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ start   │ -o=json --download-only -p download-only-808492 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-808492   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-808492                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-808492   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-116436                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-116436   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-808492                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-808492   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ start   │ --download-only -p download-docker-352708 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-352708 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ delete  │ -p download-docker-352708                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-352708 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ start   │ --download-only -p binary-mirror-386230 --alsologtostderr --binary-mirror http://127.0.0.1:41417 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-386230   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ delete  │ -p binary-mirror-386230                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-386230   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ addons  │ disable dashboard -p addons-642189                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-642189          │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ addons  │ enable dashboard -p addons-642189                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-642189          │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ start   │ -p addons-642189 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-642189          │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ addons-642189 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-642189          │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-642189 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-642189          │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ enable headlamp -p addons-642189 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-642189          │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 18:56:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 18:56:23.507351  497052 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:56:23.507656  497052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:23.507668  497052 out.go:374] Setting ErrFile to fd 2...
	I1017 18:56:23.507673  497052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:23.507931  497052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 18:56:23.508553  497052 out.go:368] Setting JSON to false
	I1017 18:56:23.509607  497052 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9522,"bootTime":1760717861,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 18:56:23.509729  497052 start.go:141] virtualization: kvm guest
	I1017 18:56:23.511775  497052 out.go:179] * [addons-642189] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 18:56:23.513138  497052 notify.go:220] Checking for updates...
	I1017 18:56:23.513165  497052 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 18:56:23.514764  497052 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 18:56:23.516385  497052 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 18:56:23.517781  497052 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 18:56:23.518988  497052 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 18:56:23.520177  497052 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 18:56:23.521466  497052 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 18:56:23.544817  497052 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 18:56:23.544957  497052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 18:56:23.607417  497052 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-17 18:56:23.596926247 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 18:56:23.607598  497052 docker.go:318] overlay module found
	I1017 18:56:23.609480  497052 out.go:179] * Using the docker driver based on user configuration
	I1017 18:56:23.610765  497052 start.go:305] selected driver: docker
	I1017 18:56:23.610783  497052 start.go:925] validating driver "docker" against <nil>
	I1017 18:56:23.610796  497052 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 18:56:23.611452  497052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 18:56:23.667517  497052 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-17 18:56:23.65761722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 18:56:23.667713  497052 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 18:56:23.667915  497052 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 18:56:23.669671  497052 out.go:179] * Using Docker driver with root privileges
	I1017 18:56:23.670744  497052 cni.go:84] Creating CNI manager for ""
	I1017 18:56:23.670804  497052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 18:56:23.670814  497052 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 18:56:23.670885  497052 start.go:349] cluster config:
	{Name:addons-642189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-642189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 18:56:23.672139  497052 out.go:179] * Starting "addons-642189" primary control-plane node in "addons-642189" cluster
	I1017 18:56:23.673339  497052 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 18:56:23.674462  497052 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 18:56:23.675534  497052 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:56:23.675571  497052 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 18:56:23.675581  497052 cache.go:58] Caching tarball of preloaded images
	I1017 18:56:23.675655  497052 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 18:56:23.675673  497052 preload.go:233] Found /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 18:56:23.675695  497052 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 18:56:23.676034  497052 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/config.json ...
	I1017 18:56:23.676060  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/config.json: {Name:mkcde08ab33d0282fa7fc0a52d8a6d2246e9d73f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:23.692532  497052 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1017 18:56:23.692675  497052 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1017 18:56:23.692719  497052 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1017 18:56:23.692729  497052 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1017 18:56:23.692740  497052 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1017 18:56:23.692749  497052 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1017 18:56:35.637952  497052 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1017 18:56:35.637996  497052 cache.go:232] Successfully downloaded all kic artifacts
	I1017 18:56:35.638046  497052 start.go:360] acquireMachinesLock for addons-642189: {Name:mk981f556bc62a56e256ed48011138888bf0d350 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 18:56:35.638202  497052 start.go:364] duration metric: took 115.785µs to acquireMachinesLock for "addons-642189"
	I1017 18:56:35.638238  497052 start.go:93] Provisioning new machine with config: &{Name:addons-642189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-642189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 18:56:35.638320  497052 start.go:125] createHost starting for "" (driver="docker")
	I1017 18:56:35.640279  497052 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1017 18:56:35.640566  497052 start.go:159] libmachine.API.Create for "addons-642189" (driver="docker")
	I1017 18:56:35.640608  497052 client.go:168] LocalClient.Create starting
	I1017 18:56:35.640790  497052 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem
	I1017 18:56:35.758344  497052 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem
	I1017 18:56:36.285762  497052 cli_runner.go:164] Run: docker network inspect addons-642189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 18:56:36.303852  497052 cli_runner.go:211] docker network inspect addons-642189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 18:56:36.303932  497052 network_create.go:284] running [docker network inspect addons-642189] to gather additional debugging logs...
	I1017 18:56:36.304036  497052 cli_runner.go:164] Run: docker network inspect addons-642189
	W1017 18:56:36.321773  497052 cli_runner.go:211] docker network inspect addons-642189 returned with exit code 1
	I1017 18:56:36.321839  497052 network_create.go:287] error running [docker network inspect addons-642189]: docker network inspect addons-642189: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-642189 not found
	I1017 18:56:36.321859  497052 network_create.go:289] output of [docker network inspect addons-642189]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-642189 not found
	
	** /stderr **
	I1017 18:56:36.321957  497052 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 18:56:36.339965  497052 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00162a6b0}
	I1017 18:56:36.340015  497052 network_create.go:124] attempt to create docker network addons-642189 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1017 18:56:36.340099  497052 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-642189 addons-642189
	I1017 18:56:36.401297  497052 network_create.go:108] docker network addons-642189 192.168.49.0/24 created
	I1017 18:56:36.401373  497052 kic.go:121] calculated static IP "192.168.49.2" for the "addons-642189" container
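The network-create step above can be checked by hand with the same docker CLI the harness drives; the network name, subnet, and gateway come straight from the log (a verification sketch, not part of the test run):

	# Confirm the bridge network minikube created at 18:56:36
	docker network inspect addons-642189 \
	  --format '{{.Name}} {{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected: addons-642189 192.168.49.0/24 192.168.49.1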
	I1017 18:56:36.401470  497052 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 18:56:36.418860  497052 cli_runner.go:164] Run: docker volume create addons-642189 --label name.minikube.sigs.k8s.io=addons-642189 --label created_by.minikube.sigs.k8s.io=true
	I1017 18:56:36.437865  497052 oci.go:103] Successfully created a docker volume addons-642189
	I1017 18:56:36.437964  497052 cli_runner.go:164] Run: docker run --rm --name addons-642189-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-642189 --entrypoint /usr/bin/test -v addons-642189:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 18:56:42.795042  497052 cli_runner.go:217] Completed: docker run --rm --name addons-642189-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-642189 --entrypoint /usr/bin/test -v addons-642189:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (6.357019308s)
	I1017 18:56:42.795116  497052 oci.go:107] Successfully prepared a docker volume addons-642189
	I1017 18:56:42.795165  497052 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:56:42.795197  497052 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 18:56:42.795275  497052 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-642189:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1017 18:56:47.272546  497052 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-642189:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.477214832s)
	I1017 18:56:47.272581  497052 kic.go:203] duration metric: took 4.477382627s to extract preloaded images to volume ...
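The two docker run invocations above implement the usual kic preload pattern: a throwaway sidecar first proves the volume is writable at /var, then a tar container unpacks the lz4 preload into it. Assuming some small image such as busybox is available locally (an assumption, not from the log), the result can be peeked at before the node container ever starts:

	# List what the preload extracted into the node volume (sketch)
	docker run --rm -v addons-642189:/var busybox ls /var/lib
	# should include containers/ (the cri-o image store) among others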
	W1017 18:56:47.272678  497052 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1017 18:56:47.272749  497052 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1017 18:56:47.272791  497052 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 18:56:47.329001  497052 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-642189 --name addons-642189 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-642189 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-642189 --network addons-642189 --ip 192.168.49.2 --volume addons-642189:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 18:56:47.606142  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Running}}
	I1017 18:56:47.625600  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:56:47.644221  497052 cli_runner.go:164] Run: docker exec addons-642189 stat /var/lib/dpkg/alternatives/iptables
	I1017 18:56:47.691973  497052 oci.go:144] the created container "addons-642189" has a running status.
	I1017 18:56:47.692009  497052 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa...
	I1017 18:56:48.389604  497052 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 18:56:48.415717  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:56:48.434156  497052 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 18:56:48.434179  497052 kic_runner.go:114] Args: [docker exec --privileged addons-642189 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 18:56:48.478824  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:56:48.496258  497052 machine.go:93] provisionDockerMachine start ...
	I1017 18:56:48.496378  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:48.514690  497052 main.go:141] libmachine: Using SSH client type: native
	I1017 18:56:48.514942  497052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1017 18:56:48.514955  497052 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 18:56:48.649170  497052 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-642189
	
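	Port 33138 in the SSH client struct above is not fixed: the container publishes 22/tcp to an ephemeral 127.0.0.1 port, and minikube resolves it with the inspect query shown in the log. The same lookup by hand (format string taken verbatim from the log):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-642189
	# e.g. 33138 (differs on every run)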
	I1017 18:56:48.649206  497052 ubuntu.go:182] provisioning hostname "addons-642189"
	I1017 18:56:48.649283  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:48.667879  497052 main.go:141] libmachine: Using SSH client type: native
	I1017 18:56:48.668109  497052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1017 18:56:48.668124  497052 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-642189 && echo "addons-642189" | sudo tee /etc/hostname
	I1017 18:56:48.812829  497052 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-642189
	
	I1017 18:56:48.812917  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:48.831243  497052 main.go:141] libmachine: Using SSH client type: native
	I1017 18:56:48.831518  497052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1017 18:56:48.831538  497052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-642189' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-642189/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-642189' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 18:56:48.965874  497052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 18:56:48.965935  497052 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-492109/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-492109/.minikube}
	I1017 18:56:48.965971  497052 ubuntu.go:190] setting up certificates
	I1017 18:56:48.965986  497052 provision.go:84] configureAuth start
	I1017 18:56:48.966061  497052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-642189
	I1017 18:56:48.985364  497052 provision.go:143] copyHostCerts
	I1017 18:56:48.985455  497052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem (1123 bytes)
	I1017 18:56:48.985568  497052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem (1679 bytes)
	I1017 18:56:48.985626  497052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem (1078 bytes)
	I1017 18:56:48.985697  497052 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem org=jenkins.addons-642189 san=[127.0.0.1 192.168.49.2 addons-642189 localhost minikube]
	I1017 18:56:49.161622  497052 provision.go:177] copyRemoteCerts
	I1017 18:56:49.161711  497052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 18:56:49.161762  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:49.180072  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:56:49.278715  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 18:56:49.299727  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 18:56:49.318591  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 18:56:49.337286  497052 provision.go:87] duration metric: took 371.279564ms to configureAuth
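configureAuth generated a server certificate whose SANs (127.0.0.1, 192.168.49.2, addons-642189, localhost, minikube) are listed at 18:56:48 and then copied to /etc/docker inside the node. A sketch to read them back over the SSH endpoint shown above (key path and username from the log; -ext needs OpenSSL 1.1.1+):

	ssh -p 33138 -i /home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa \
	  docker@127.0.0.1 'sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName'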
	I1017 18:56:49.337327  497052 ubuntu.go:206] setting minikube options for container-runtime
	I1017 18:56:49.337500  497052 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:56:49.337605  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:49.355870  497052 main.go:141] libmachine: Using SSH client type: native
	I1017 18:56:49.356105  497052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1017 18:56:49.356134  497052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 18:56:49.609546  497052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 18:56:49.609576  497052 machine.go:96] duration metric: took 1.11329531s to provisionDockerMachine
	I1017 18:56:49.609590  497052 client.go:171] duration metric: took 13.968972026s to LocalClient.Create
	I1017 18:56:49.609616  497052 start.go:167] duration metric: took 13.9690511s to libmachine.API.Create "addons-642189"
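The env file written a few lines above is, presumably, sourced by the crio systemd unit in the kic base image; that is how the --insecure-registry flag for the service CIDR reaches the runtime. It can be confirmed in the node with:

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '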
	I1017 18:56:49.609626  497052 start.go:293] postStartSetup for "addons-642189" (driver="docker")
	I1017 18:56:49.609642  497052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 18:56:49.609734  497052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 18:56:49.609793  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:49.627788  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:56:49.727586  497052 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 18:56:49.731374  497052 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 18:56:49.731414  497052 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 18:56:49.731428  497052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/addons for local assets ...
	I1017 18:56:49.731513  497052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/files for local assets ...
	I1017 18:56:49.731556  497052 start.go:296] duration metric: took 121.92119ms for postStartSetup
	I1017 18:56:49.731923  497052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-642189
	I1017 18:56:49.749707  497052 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/config.json ...
	I1017 18:56:49.749992  497052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 18:56:49.750035  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:49.767939  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:56:49.862551  497052 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 18:56:49.867747  497052 start.go:128] duration metric: took 14.229404339s to createHost
	I1017 18:56:49.867776  497052 start.go:83] releasing machines lock for "addons-642189", held for 14.229555848s
	I1017 18:56:49.867846  497052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-642189
	I1017 18:56:49.886000  497052 ssh_runner.go:195] Run: cat /version.json
	I1017 18:56:49.886052  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:49.886108  497052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 18:56:49.886193  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:56:49.904941  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:56:49.904988  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:56:50.062813  497052 ssh_runner.go:195] Run: systemctl --version
	I1017 18:56:50.069615  497052 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 18:56:50.105769  497052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 18:56:50.110954  497052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 18:56:50.111020  497052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 18:56:50.139285  497052 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
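The find/mv one-liner above sidelines any competing bridge or podman CNI configs by appending a .mk_disabled suffix, so only the kindnet config recommended earlier stays active. The rename is non-destructive and easy to audit inside the node:

	# Disabled configs remain in place with their new suffix
	ls /etc/cni/net.d/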
	I1017 18:56:50.139318  497052 start.go:495] detecting cgroup driver to use...
	I1017 18:56:50.139349  497052 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 18:56:50.139391  497052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 18:56:50.156632  497052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 18:56:50.169358  497052 docker.go:218] disabling cri-docker service (if available) ...
	I1017 18:56:50.169419  497052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 18:56:50.186340  497052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 18:56:50.204923  497052 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 18:56:50.283445  497052 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 18:56:50.377874  497052 docker.go:234] disabling docker service ...
	I1017 18:56:50.377953  497052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 18:56:50.397984  497052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 18:56:50.411531  497052 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 18:56:50.493875  497052 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 18:56:50.577910  497052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 18:56:50.591649  497052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 18:56:50.606805  497052 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 18:56:50.606879  497052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:50.618801  497052 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 18:56:50.618878  497052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:50.629437  497052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:50.638905  497052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:50.648869  497052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 18:56:50.657623  497052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:50.666930  497052 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:50.681247  497052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:50.690498  497052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 18:56:50.698638  497052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 18:56:50.706609  497052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 18:56:50.782771  497052 ssh_runner.go:195] Run: sudo systemctl restart crio
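All of the sed edits above target /etc/crio/crio.conf.d/02-crio.conf (pause image, systemd cgroup manager, conmon cgroup, the unprivileged-port sysctl); after the restart the effective values can be spot-checked with:

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf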
	I1017 18:56:50.895648  497052 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 18:56:50.895749  497052 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 18:56:50.900095  497052 start.go:563] Will wait 60s for crictl version
	I1017 18:56:50.900162  497052 ssh_runner.go:195] Run: which crictl
	I1017 18:56:50.904255  497052 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 18:56:50.931013  497052 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 18:56:50.931112  497052 ssh_runner.go:195] Run: crio --version
	I1017 18:56:50.962129  497052 ssh_runner.go:195] Run: crio --version
	I1017 18:56:50.993567  497052 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 18:56:50.994865  497052 cli_runner.go:164] Run: docker network inspect addons-642189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 18:56:51.011944  497052 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 18:56:51.016337  497052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 18:56:51.027067  497052 kubeadm.go:883] updating cluster {Name:addons-642189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-642189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 18:56:51.027187  497052 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:56:51.027230  497052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 18:56:51.060153  497052 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 18:56:51.060178  497052 crio.go:433] Images already preloaded, skipping extraction
	I1017 18:56:51.060225  497052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 18:56:51.087679  497052 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 18:56:51.087734  497052 cache_images.go:85] Images are preloaded, skipping loading
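The preload check here is simply crictl asking the runtime for its image list; run by hand it shows the kube images already present in the cri-o store, which is why no pull happens at this point:

	sudo crictl images | grep registry.k8s.io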
	I1017 18:56:51.087744  497052 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 18:56:51.087877  497052 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-642189 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-642189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 18:56:51.087942  497052 ssh_runner.go:195] Run: crio config
	I1017 18:56:51.135280  497052 cni.go:84] Creating CNI manager for ""
	I1017 18:56:51.135306  497052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 18:56:51.135326  497052 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 18:56:51.135353  497052 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-642189 NodeName:addons-642189 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 18:56:51.135496  497052 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-642189"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
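	The document above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and copied into place at 18:56:52. Assuming the node's kubeadm binary at the path shown in the log, the file can also be checked offline with kubeadm's own validator (available in recent releases) once the copy has happened:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml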
	I1017 18:56:51.135562  497052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 18:56:51.144131  497052 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 18:56:51.144235  497052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 18:56:51.153060  497052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 18:56:51.166714  497052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 18:56:51.183243  497052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1017 18:56:51.197036  497052 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1017 18:56:51.201059  497052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
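This is the same filter-and-append /etc/hosts rewrite used for host.minikube.internal at 18:56:51.016; after both runs the node resolves the gateway and the control plane locally:

	grep minikube.internal /etc/hosts
	# 192.168.49.1	host.minikube.internal
	# 192.168.49.2	control-plane.minikube.internal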
	I1017 18:56:51.211817  497052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 18:56:51.293428  497052 ssh_runner.go:195] Run: sudo systemctl start kubelet
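The kubelet unit written at 18:56:51 is the stock service file plus the 10-kubeadm.conf drop-in carrying the ExecStart shown earlier; systemd can display the merged result, with each source file named in a comment header:

	systemctl cat kubelet
	# /lib/systemd/system/kubelet.service
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf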
	I1017 18:56:51.321321  497052 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189 for IP: 192.168.49.2
	I1017 18:56:51.321354  497052 certs.go:195] generating shared ca certs ...
	I1017 18:56:51.321376  497052 certs.go:227] acquiring lock for ca certs: {Name:mkc97483d62151ba5c32d923dd19e3e2b3661468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:51.321514  497052 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key
	I1017 18:56:51.629873  497052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt ...
	I1017 18:56:51.629906  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt: {Name:mk440c0dfa16bb02464fbb467fa5aa87c3765bd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:51.630114  497052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key ...
	I1017 18:56:51.630126  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key: {Name:mkc9a271aa2bbc3358be01e9b4bce62869f1d064 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:51.630204  497052 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key
	I1017 18:56:51.740419  497052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.crt ...
	I1017 18:56:51.740450  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.crt: {Name:mkf94b45b8d9778becd2cdd6b12a0b633a9ae526 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:51.740620  497052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key ...
	I1017 18:56:51.740631  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key: {Name:mkce39cadc70eea20f0f21b9ae81efbd1f2d8303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:51.740714  497052 certs.go:257] generating profile certs ...
	I1017 18:56:51.740777  497052 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.key
	I1017 18:56:51.740792  497052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt with IP's: []
	I1017 18:56:52.050258  497052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt ...
	I1017 18:56:52.050292  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: {Name:mk31ad05cd8e9966a999e9ce8772563fd937d0fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:52.050468  497052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.key ...
	I1017 18:56:52.050491  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.key: {Name:mkd8228d8fac04b24f141738f06daa560efd24a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:52.050573  497052 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.key.266c3263
	I1017 18:56:52.050592  497052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.crt.266c3263 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1017 18:56:52.224447  497052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.crt.266c3263 ...
	I1017 18:56:52.224483  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.crt.266c3263: {Name:mkccccdd0383a0c5961d198a8ade089cc04198ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:52.224661  497052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.key.266c3263 ...
	I1017 18:56:52.224673  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.key.266c3263: {Name:mk6854fd0ecd7f2f485707f53b7d269e7aa49c9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:52.224757  497052 certs.go:382] copying /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.crt.266c3263 -> /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.crt
	I1017 18:56:52.224857  497052 certs.go:386] copying /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.key.266c3263 -> /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.key
	I1017 18:56:52.224915  497052 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/proxy-client.key
	I1017 18:56:52.224935  497052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/proxy-client.crt with IP's: []
	I1017 18:56:52.486460  497052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/proxy-client.crt ...
	I1017 18:56:52.486493  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/proxy-client.crt: {Name:mk81e4a41268fac4df526b7a037b0d607ca1da79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:52.486661  497052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/proxy-client.key ...
	I1017 18:56:52.486675  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/proxy-client.key: {Name:mke25b84bb762b01365af8953171bb774daff27b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:52.486855  497052 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 18:56:52.486891  497052 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem (1078 bytes)
	I1017 18:56:52.486914  497052 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem (1123 bytes)
	I1017 18:56:52.486935  497052 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem (1679 bytes)
	I1017 18:56:52.487601  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 18:56:52.506255  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 18:56:52.524081  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 18:56:52.541915  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 18:56:52.560046  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1017 18:56:52.578133  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 18:56:52.596352  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 18:56:52.614649  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 18:56:52.632554  497052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 18:56:52.652890  497052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
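The apiserver cert generated at 18:56:52 carries the IP SANs 10.96.0.1 (the service-network VIP), 127.0.0.1, 10.0.0.1, and the node IP 192.168.49.2, and was just copied to /var/lib/minikube/certs above. A sketch to read the SANs back inside the node (OpenSSL 1.1.1+ for -ext):

	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -ext subjectAltName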
	I1017 18:56:52.666484  497052 ssh_runner.go:195] Run: openssl version
	I1017 18:56:52.672995  497052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 18:56:52.684809  497052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 18:56:52.688846  497052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 18:56:52.688926  497052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 18:56:52.723670  497052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
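The b5213941.0 link name is the CA certificate's subject hash, which is exactly what the preceding openssl invocation computes; the pairing is easy to confirm by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem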
	I1017 18:56:52.732978  497052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 18:56:52.736885  497052 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 18:56:52.736937  497052 kubeadm.go:400] StartCluster: {Name:addons-642189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-642189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 18:56:52.737016  497052 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:56:52.737064  497052 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:56:52.765597  497052 cri.go:89] found id: ""
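The empty result ("found id: \"\"") confirms no kube-system containers exist yet, i.e. a genuinely fresh node. The same check can be reproduced by hand with the flags from the log line above:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # prints nothing on a fresh node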
	I1017 18:56:52.765695  497052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 18:56:52.774301  497052 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 18:56:52.783040  497052 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 18:56:52.783112  497052 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 18:56:52.791264  497052 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 18:56:52.791291  497052 kubeadm.go:157] found existing configuration files:
	
	I1017 18:56:52.791341  497052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 18:56:52.799203  497052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 18:56:52.799279  497052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 18:56:52.806929  497052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 18:56:52.815246  497052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 18:56:52.815314  497052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 18:56:52.823193  497052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 18:56:52.831014  497052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 18:56:52.831078  497052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 18:56:52.838998  497052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 18:56:52.847468  497052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 18:56:52.847536  497052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 18:56:52.855379  497052 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 18:56:52.896194  497052 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 18:56:52.896314  497052 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 18:56:52.920241  497052 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 18:56:52.920364  497052 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1017 18:56:52.920449  497052 kubeadm.go:318] OS: Linux
	I1017 18:56:52.920551  497052 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 18:56:52.920638  497052 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 18:56:52.920753  497052 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 18:56:52.920827  497052 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 18:56:52.920894  497052 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 18:56:52.920960  497052 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 18:56:52.921021  497052 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 18:56:52.921103  497052 kubeadm.go:318] CGROUPS_IO: enabled
	I1017 18:56:52.986357  497052 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 18:56:52.986502  497052 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 18:56:52.986654  497052 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 18:56:52.995098  497052 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 18:56:52.997834  497052 out.go:252]   - Generating certificates and keys ...
	I1017 18:56:52.997958  497052 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 18:56:52.998028  497052 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 18:56:53.121595  497052 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 18:56:53.446235  497052 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 18:56:53.548350  497052 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 18:56:53.750146  497052 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 18:56:53.893103  497052 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 18:56:53.893244  497052 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-642189 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1017 18:56:54.008617  497052 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 18:56:54.008802  497052 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-642189 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1017 18:56:54.146010  497052 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 18:56:54.361105  497052 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 18:56:54.498218  497052 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 18:56:54.498326  497052 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 18:56:54.608762  497052 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 18:56:54.847587  497052 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 18:56:55.118157  497052 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 18:56:55.797864  497052 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 18:56:55.930740  497052 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 18:56:55.931303  497052 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 18:56:55.935312  497052 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 18:56:55.936953  497052 out.go:252]   - Booting up control plane ...
	I1017 18:56:55.937100  497052 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 18:56:55.937227  497052 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 18:56:55.937946  497052 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 18:56:55.953439  497052 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 18:56:55.953567  497052 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 18:56:55.960790  497052 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 18:56:55.960919  497052 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 18:56:55.960968  497052 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 18:56:56.058833  497052 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 18:56:56.059018  497052 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 18:56:56.559852  497052 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.233283ms
	I1017 18:56:56.563147  497052 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 18:56:56.563278  497052 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1017 18:56:56.563393  497052 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 18:56:56.563506  497052 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 18:56:57.580675  497052 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.017460176s
	I1017 18:56:58.636295  497052 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.073161983s
	I1017 18:57:00.565317  497052 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002162474s
	I1017 18:57:00.577083  497052 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 18:57:00.588338  497052 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 18:57:00.597436  497052 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 18:57:00.597677  497052 kubeadm.go:318] [mark-control-plane] Marking the node addons-642189 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 18:57:00.605981  497052 kubeadm.go:318] [bootstrap-token] Using token: zu8ikn.cdwz8remj9o7hw3s
	I1017 18:57:00.607636  497052 out.go:252]   - Configuring RBAC rules ...
	I1017 18:57:00.607822  497052 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 18:57:00.611169  497052 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 18:57:00.617020  497052 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 18:57:00.620528  497052 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 18:57:00.623428  497052 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 18:57:00.626491  497052 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 18:57:00.972138  497052 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 18:57:01.389416  497052 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 18:57:01.970957  497052 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 18:57:01.971841  497052 kubeadm.go:318] 
	I1017 18:57:01.971927  497052 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 18:57:01.971940  497052 kubeadm.go:318] 
	I1017 18:57:01.972047  497052 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 18:57:01.972059  497052 kubeadm.go:318] 
	I1017 18:57:01.972093  497052 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 18:57:01.972181  497052 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 18:57:01.972280  497052 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 18:57:01.972301  497052 kubeadm.go:318] 
	I1017 18:57:01.972385  497052 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 18:57:01.972397  497052 kubeadm.go:318] 
	I1017 18:57:01.972465  497052 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 18:57:01.972477  497052 kubeadm.go:318] 
	I1017 18:57:01.972558  497052 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 18:57:01.972641  497052 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 18:57:01.972747  497052 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 18:57:01.972758  497052 kubeadm.go:318] 
	I1017 18:57:01.972864  497052 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 18:57:01.972953  497052 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 18:57:01.972958  497052 kubeadm.go:318] 
	I1017 18:57:01.973064  497052 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token zu8ikn.cdwz8remj9o7hw3s \
	I1017 18:57:01.973201  497052 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e \
	I1017 18:57:01.973229  497052 kubeadm.go:318] 	--control-plane 
	I1017 18:57:01.973237  497052 kubeadm.go:318] 
	I1017 18:57:01.973333  497052 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 18:57:01.973340  497052 kubeadm.go:318] 
	I1017 18:57:01.973444  497052 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token zu8ikn.cdwz8remj9o7hw3s \
	I1017 18:57:01.973586  497052 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e 
	I1017 18:57:01.976079  497052 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 18:57:01.976244  497052 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
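The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA certificate using the standard kubeadm recipe (the CA lives under /var/lib/minikube/certs here, per the [certs] lines earlier in the log):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'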
	I1017 18:57:01.976272  497052 cni.go:84] Creating CNI manager for ""
	I1017 18:57:01.976286  497052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 18:57:01.978848  497052 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 18:57:01.980075  497052 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 18:57:01.984608  497052 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 18:57:01.984626  497052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 18:57:01.998020  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 18:57:02.212363  497052 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 18:57:02.212424  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:02.212473  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-642189 minikube.k8s.io/updated_at=2025_10_17T18_57_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=addons-642189 minikube.k8s.io/primary=true
	I1017 18:57:02.222935  497052 ops.go:34] apiserver oom_adj: -16
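The -16 logged above is read from the kernel's legacy OOM-score interface for the apiserver process; a negative value makes the OOM killer less likely to target it. It can be inspected directly with the same proc path the log used:

    cat /proc/$(pgrep kube-apiserver)/oom_adj    # -16 here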
	I1017 18:57:02.292126  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:02.793222  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:03.292554  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:03.792902  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:04.292613  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:04.792220  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:05.292263  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:05.792943  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:06.293220  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:06.792903  497052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:06.872377  497052 kubeadm.go:1113] duration metric: took 4.660013134s to wait for elevateKubeSystemPrivileges
	I1017 18:57:06.872408  497052 kubeadm.go:402] duration metric: took 14.135475724s to StartCluster
	I1017 18:57:06.872427  497052 settings.go:142] acquiring lock: {Name:mkb8ebc6edbbb6915dd74086f502bcc2721555a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:06.872562  497052 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 18:57:06.873086  497052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:06.873343  497052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 18:57:06.873379  497052 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 18:57:06.873470  497052 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1017 18:57:06.873624  497052 addons.go:69] Setting yakd=true in profile "addons-642189"
	I1017 18:57:06.873666  497052 addons.go:238] Setting addon yakd=true in "addons-642189"
	I1017 18:57:06.873674  497052 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:57:06.873736  497052 addons.go:69] Setting gcp-auth=true in profile "addons-642189"
	I1017 18:57:06.873748  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.873759  497052 mustload.go:65] Loading cluster: addons-642189
	I1017 18:57:06.873669  497052 addons.go:69] Setting inspektor-gadget=true in profile "addons-642189"
	I1017 18:57:06.873781  497052 addons.go:238] Setting addon inspektor-gadget=true in "addons-642189"
	I1017 18:57:06.873787  497052 addons.go:69] Setting ingress=true in profile "addons-642189"
	I1017 18:57:06.873813  497052 addons.go:238] Setting addon ingress=true in "addons-642189"
	I1017 18:57:06.873814  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.873866  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.873868  497052 addons.go:69] Setting ingress-dns=true in profile "addons-642189"
	I1017 18:57:06.873895  497052 addons.go:238] Setting addon ingress-dns=true in "addons-642189"
	I1017 18:57:06.873930  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.873985  497052 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:57:06.874148  497052 addons.go:69] Setting cloud-spanner=true in profile "addons-642189"
	I1017 18:57:06.874169  497052 addons.go:238] Setting addon cloud-spanner=true in "addons-642189"
	I1017 18:57:06.874204  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.874344  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.874363  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.874376  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.874381  497052 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-642189"
	I1017 18:57:06.874398  497052 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-642189"
	I1017 18:57:06.874415  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.874420  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.874660  497052 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-642189"
	I1017 18:57:06.874767  497052 addons.go:69] Setting registry-creds=true in profile "addons-642189"
	I1017 18:57:06.874795  497052 addons.go:238] Setting addon registry-creds=true in "addons-642189"
	I1017 18:57:06.874819  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.874843  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.874771  497052 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-642189"
	I1017 18:57:06.875010  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.875015  497052 addons.go:69] Setting default-storageclass=true in profile "addons-642189"
	I1017 18:57:06.875058  497052 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-642189"
	I1017 18:57:06.875095  497052 addons.go:69] Setting storage-provisioner=true in profile "addons-642189"
	I1017 18:57:06.875108  497052 addons.go:238] Setting addon storage-provisioner=true in "addons-642189"
	I1017 18:57:06.875128  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.875246  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.875328  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.875862  497052 out.go:179] * Verifying Kubernetes components...
	I1017 18:57:06.875944  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.876634  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.877017  497052 addons.go:69] Setting volumesnapshots=true in profile "addons-642189"
	I1017 18:57:06.877039  497052 addons.go:238] Setting addon volumesnapshots=true in "addons-642189"
	I1017 18:57:06.877066  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.877528  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.877637  497052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 18:57:06.877721  497052 addons.go:69] Setting metrics-server=true in profile "addons-642189"
	I1017 18:57:06.877739  497052 addons.go:238] Setting addon metrics-server=true in "addons-642189"
	I1017 18:57:06.877750  497052 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-642189"
	I1017 18:57:06.877769  497052 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-642189"
	I1017 18:57:06.877776  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.878061  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.878272  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.878831  497052 addons.go:69] Setting volcano=true in profile "addons-642189"
	I1017 18:57:06.878911  497052 addons.go:238] Setting addon volcano=true in "addons-642189"
	I1017 18:57:06.878945  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.879404  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.879707  497052 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-642189"
	I1017 18:57:06.879734  497052 addons.go:69] Setting registry=true in profile "addons-642189"
	I1017 18:57:06.879748  497052 addons.go:238] Setting addon registry=true in "addons-642189"
	I1017 18:57:06.879754  497052 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-642189"
	I1017 18:57:06.879780  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.879791  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.874363  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.897408  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.897926  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.898047  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.919232  497052 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1017 18:57:06.920916  497052 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 18:57:06.920943  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1017 18:57:06.921010  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:06.929739  497052 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1017 18:57:06.931521  497052 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1017 18:57:06.931552  497052 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1017 18:57:06.931635  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:06.932853  497052 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1017 18:57:06.934471  497052 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 18:57:06.936749  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1017 18:57:06.936884  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:06.940591  497052 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1017 18:57:06.940860  497052 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1017 18:57:06.942508  497052 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 18:57:06.942533  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1017 18:57:06.942601  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:06.943015  497052 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1017 18:57:06.943034  497052 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1017 18:57:06.943206  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:06.945078  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.952255  497052 addons.go:238] Setting addon default-storageclass=true in "addons-642189"
	I1017 18:57:06.952323  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.953224  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:06.961703  497052 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1017 18:57:06.963163  497052 out.go:179]   - Using image docker.io/registry:3.0.0
	I1017 18:57:06.964250  497052 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1017 18:57:06.964518  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1017 18:57:06.965016  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:06.972529  497052 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1017 18:57:06.974789  497052 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1017 18:57:06.974810  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1017 18:57:06.974872  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:06.997260  497052 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-642189"
	I1017 18:57:06.997324  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:06.997622  497052 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 18:57:06.997861  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:07.002754  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.002810  497052 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1017 18:57:07.002857  497052 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 18:57:07.002873  497052 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 18:57:07.002936  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:07.002940  497052 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 18:57:07.002952  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 18:57:07.003014  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:07.004419  497052 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1017 18:57:07.004442  497052 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1017 18:57:07.004501  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:07.009330  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.016524  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	W1017 18:57:07.017881  497052 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1017 18:57:07.022332  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1017 18:57:07.022418  497052 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1017 18:57:07.023826  497052 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 18:57:07.023962  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1017 18:57:07.026785  497052 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 18:57:07.026937  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1017 18:57:07.027184  497052 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1017 18:57:07.029327  497052 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 18:57:07.029482  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1017 18:57:07.029740  497052 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 18:57:07.029762  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1017 18:57:07.029863  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:07.029870  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:07.033494  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1017 18:57:07.034464  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1017 18:57:07.036340  497052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1017 18:57:07.036362  497052 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1017 18:57:07.036428  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:07.036634  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1017 18:57:07.042309  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.044540  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.046265  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1017 18:57:07.047646  497052 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1017 18:57:07.048972  497052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1017 18:57:07.049001  497052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1017 18:57:07.049076  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:07.063893  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.066625  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.069757  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.071756  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.075808  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.077102  497052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 18:57:07.078559  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.081097  497052 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1017 18:57:07.082439  497052 out.go:179]   - Using image docker.io/busybox:stable
	I1017 18:57:07.084299  497052 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 18:57:07.084506  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1017 18:57:07.084636  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:07.108078  497052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 18:57:07.109571  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.110794  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.115960  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	W1017 18:57:07.118479  497052 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 18:57:07.118525  497052 retry.go:31] will retry after 298.894403ms: ssh: handshake failed: EOF
	I1017 18:57:07.125416  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.133712  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:07.210395  497052 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1017 18:57:07.210429  497052 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1017 18:57:07.223514  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 18:57:07.224774  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 18:57:07.233708  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 18:57:07.234161  497052 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1017 18:57:07.234183  497052 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1017 18:57:07.250830  497052 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1017 18:57:07.250864  497052 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1017 18:57:07.254519  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 18:57:07.260703  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1017 18:57:07.273146  497052 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1017 18:57:07.273174  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1017 18:57:07.274451  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 18:57:07.277404  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 18:57:07.288678  497052 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1017 18:57:07.288721  497052 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1017 18:57:07.296406  497052 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:07.296437  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1017 18:57:07.311975  497052 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1017 18:57:07.312079  497052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1017 18:57:07.316400  497052 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1017 18:57:07.316539  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1017 18:57:07.329960  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:07.330065  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 18:57:07.332272  497052 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1017 18:57:07.332294  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1017 18:57:07.335379  497052 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1017 18:57:07.335403  497052 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1017 18:57:07.350248  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1017 18:57:07.350651  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 18:57:07.365424  497052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1017 18:57:07.365473  497052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1017 18:57:07.390354  497052 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 18:57:07.390389  497052 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1017 18:57:07.398787  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1017 18:57:07.405249  497052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1017 18:57:07.405281  497052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1017 18:57:07.463232  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 18:57:07.468750  497052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1017 18:57:07.468782  497052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1017 18:57:07.497985  497052 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
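The sed pipeline at 18:57:07.077 splices a hosts block into the CoreDNS Corefile so pods can resolve host.minikube.internal; reconstructed from the sed expressions in that command, the injected fragment looks like this:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }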
	I1017 18:57:07.500656  497052 node_ready.go:35] waiting up to 6m0s for node "addons-642189" to be "Ready" ...
	I1017 18:57:07.523654  497052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1017 18:57:07.523698  497052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1017 18:57:07.589775  497052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1017 18:57:07.589802  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1017 18:57:07.638225  497052 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1017 18:57:07.638338  497052 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1017 18:57:07.678653  497052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1017 18:57:07.678869  497052 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1017 18:57:07.703316  497052 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1017 18:57:07.703432  497052 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1017 18:57:07.776584  497052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1017 18:57:07.776657  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1017 18:57:07.778449  497052 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1017 18:57:07.778473  497052 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1017 18:57:07.817885  497052 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1017 18:57:07.817924  497052 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1017 18:57:07.835494  497052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1017 18:57:07.835521  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1017 18:57:07.853021  497052 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 18:57:07.853056  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1017 18:57:07.870148  497052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1017 18:57:07.870183  497052 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1017 18:57:07.899178  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 18:57:07.915465  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1017 18:57:08.016285  497052 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-642189" context rescaled to 1 replicas
	I1017 18:57:08.515729  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.241231003s)
	I1017 18:57:08.515783  497052 addons.go:479] Verifying addon ingress=true in "addons-642189"
	I1017 18:57:08.515822  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.238387018s)
	I1017 18:57:08.515929  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.185932503s)
	W1017 18:57:08.515968  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:08.515989  497052 retry.go:31] will retry after 374.51806ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
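	kubectl's "apiVersion not set, kind not set" means the documents in ig-crd.yaml are missing the two mandatory top-level fields, likely because the file arrived truncated (only 14 bytes per the scp line at 18:57:06.943). Every valid manifest document needs at least a header like this (illustrative values for a CRD):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition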
	I1017 18:57:08.515995  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.185905685s)
	I1017 18:57:08.516062  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.165771811s)
	I1017 18:57:08.516085  497052 addons.go:479] Verifying addon registry=true in "addons-642189"
	I1017 18:57:08.516192  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.117371509s)
	I1017 18:57:08.516135  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.16546121s)
	I1017 18:57:08.516286  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.053012277s)
	I1017 18:57:08.516314  497052 addons.go:479] Verifying addon metrics-server=true in "addons-642189"
	I1017 18:57:08.517438  497052 out.go:179] * Verifying ingress addon...
	I1017 18:57:08.518446  497052 out.go:179] * Verifying registry addon...
	I1017 18:57:08.518445  497052 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-642189 service yakd-dashboard -n yakd-dashboard
	
	I1017 18:57:08.520136  497052 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1017 18:57:08.521074  497052 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1017 18:57:08.523652  497052 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 18:57:08.523671  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:08.523774  497052 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1017 18:57:08.523794  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
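	The kapi.go polling above (and the long run of "waiting for pod ... Pending" lines that follows) is minikube repeatedly listing pods by label selector until they report Running. Assuming direct kubectl access to the cluster, the same checks by hand are:

		# same label selectors and namespaces as in the log lines above
		kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
		kubectl get pods -n kube-system -l kubernetes.io/minikube-addons=registry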
	I1017 18:57:08.891254  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:09.023998  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:09.024271  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:09.072137  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.172899155s)
	W1017 18:57:09.072185  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1017 18:57:09.072213  497052 retry.go:31] will retry after 249.396086ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
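	This second failure is an ordering race rather than a bad manifest: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define its kind, and the API server has not established the new CRDs yet, hence "ensure CRDs are installed first". The usual fix (a sketch of the general pattern, not necessarily what minikube's retry ends up doing) is to apply the CRDs first, wait for them to be established, and only then apply the dependent objects:

		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
		    apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		# block until the API server reports the new CRD as Established
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
		    wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
		    apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml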
	I1017 18:57:09.072406  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.156874885s)
	I1017 18:57:09.072448  497052 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-642189"
	I1017 18:57:09.073971  497052 out.go:179] * Verifying csi-hostpath-driver addon...
	I1017 18:57:09.076827  497052 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1017 18:57:09.083764  497052 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 18:57:09.083786  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:09.322421  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1017 18:57:09.504799  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
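	The node_ready.go warnings that recur through this log are the same poll-until-ready pattern applied to the node object itself; by hand that is roughly:

		kubectl wait --for=condition=Ready --timeout=5m node/addons-642189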
	I1017 18:57:09.524220  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:09.524410  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1017 18:57:09.534922  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:09.534956  497052 retry.go:31] will retry after 539.222379ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1017 18:57:09.624635  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:10.025270  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:10.025472  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:10.074521  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:10.126795  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:10.524482  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:10.524664  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:10.625748  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:11.024171  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:11.024337  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:11.080213  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:11.524312  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:11.524432  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:11.625251  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:11.865006  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.542533999s)
	I1017 18:57:11.865163  497052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.790583534s)
	W1017 18:57:11.865205  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:11.865223  497052 retry.go:31] will retry after 428.934292ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	W1017 18:57:12.004365  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:12.023770  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:12.023808  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:12.080967  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:12.295308  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:12.524587  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:12.524761  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:12.625464  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:12.858972  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:12.859016  497052 retry.go:31] will retry after 936.652695ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1017 18:57:13.023864  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:13.024037  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:13.080884  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:13.524452  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:13.524625  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:13.625057  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:13.796107  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1017 18:57:14.004844  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:14.025283  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:14.025448  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:14.080375  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:14.362485  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:14.362520  497052 retry.go:31] will retry after 1.406793949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1017 18:57:14.524067  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:14.524160  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:14.559567  497052 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1017 18:57:14.559633  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:14.579233  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:14.625109  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:14.684495  497052 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1017 18:57:14.698651  497052 addons.go:238] Setting addon gcp-auth=true in "addons-642189"
	I1017 18:57:14.698731  497052 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:57:14.699124  497052 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:57:14.718346  497052 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1017 18:57:14.718403  497052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:57:14.737635  497052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:57:14.833229  497052 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 18:57:14.834445  497052 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1017 18:57:14.835499  497052 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1017 18:57:14.835518  497052 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1017 18:57:14.849935  497052 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1017 18:57:14.849967  497052 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1017 18:57:14.863559  497052 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 18:57:14.863585  497052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1017 18:57:14.877534  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 18:57:15.023466  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:15.024197  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:15.080355  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:15.207628  497052 addons.go:479] Verifying addon gcp-auth=true in "addons-642189"
	I1017 18:57:15.209936  497052 out.go:179] * Verifying gcp-auth addon...
	I1017 18:57:15.211922  497052 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1017 18:57:15.214515  497052 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1017 18:57:15.214540  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:15.523467  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:15.523811  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:15.580758  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:15.715890  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:15.770035  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:16.023838  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:16.024336  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:16.080438  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:16.216000  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:16.338218  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:16.338253  497052 retry.go:31] will retry after 2.303801595s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	W1017 18:57:16.504159  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:16.524422  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:16.524619  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:16.580711  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:16.715414  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:17.023865  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:17.023968  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:17.079798  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:17.216039  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:17.524165  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:17.524193  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:17.580090  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:17.715085  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:18.023971  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:18.024144  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:18.080129  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:18.215498  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:18.504486  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:18.524330  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:18.524555  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:18.580587  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:18.642741  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:18.715951  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:19.024798  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:19.024882  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:19.080294  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:19.215790  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:19.227321  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:19.227354  497052 retry.go:31] will retry after 3.672326615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1017 18:57:19.524111  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:19.524564  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:19.580795  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:19.715779  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:20.023847  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:20.024186  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:20.080231  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:20.216212  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:20.524711  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:20.524839  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:20.580856  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:20.715801  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:21.003671  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:21.023927  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:21.024167  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:21.080245  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:21.215742  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:21.524190  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:21.524193  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:21.579971  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:21.715317  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:22.023848  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:22.024229  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:22.080432  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:22.215614  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:22.524461  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:22.524537  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:22.580766  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:22.715546  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:22.900896  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1017 18:57:23.004894  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:23.024157  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:23.024312  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:23.080071  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:23.215146  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:23.477571  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:23.477610  497052 retry.go:31] will retry after 4.189491628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1017 18:57:23.524141  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:23.524182  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:23.580394  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:23.715181  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:24.023821  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:24.024052  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:24.080012  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:24.215521  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:24.523987  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:24.524128  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:24.580477  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:24.715505  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:25.023819  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:25.023936  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:25.080658  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:25.215752  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:25.504183  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:25.524016  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:25.524303  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:25.580200  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:25.714994  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:26.024295  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:26.024438  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:26.080210  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:26.215880  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:26.524463  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:26.524738  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:26.580757  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:26.715596  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:27.023921  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:27.023947  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:27.080925  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:27.216164  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:27.504598  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:27.523499  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:27.523924  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:27.580107  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:27.668243  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:27.715749  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:28.024160  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:28.024263  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:28.080326  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:28.215167  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:28.240817  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:28.240856  497052 retry.go:31] will retry after 7.578900836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
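	By this point the retry delays for the ig-crd.yaml apply have grown from ~374ms up through 7.5s (and, below, 9s): retry.go is backing off roughly exponentially, with jitter, between attempts. A minimal shell sketch of that loop; the delay schedule, attempt cap, and doubling factor are illustrative, not minikube's actual parameters:

		delay=0.4
		for attempt in 1 2 3 4 5 6 7 8 9 10; do
		    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
		        apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml \
		        && break                                           # stop once the apply succeeds
		    sleep "$delay"
		    delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')     # double the wait each round
		done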
	I1017 18:57:28.524716  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:28.524737  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:28.580418  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:28.715131  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:29.024361  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:29.024361  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:29.080145  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:29.215004  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:29.523591  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:29.523928  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:29.579844  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:29.716107  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:30.003962  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:30.023921  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:30.024112  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:30.080909  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:30.216485  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:30.524412  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:30.524621  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:30.580228  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:30.715284  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:31.023877  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:31.024146  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:31.080099  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:31.215039  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:31.523839  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:31.524125  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:31.580065  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:31.714930  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:32.024406  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:32.024482  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:32.125493  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:32.215817  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:32.503938  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:32.524589  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:32.524654  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:32.580605  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:32.715274  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:33.023722  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:33.024149  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:33.080246  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:33.215820  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:33.524333  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:33.524371  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:33.580398  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:33.715635  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:34.023932  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:34.024402  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:34.080367  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:34.215529  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:34.504708  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:34.524093  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:34.524098  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:34.580333  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:34.715532  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:35.024123  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:35.024351  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:35.080239  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:35.215351  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:35.523629  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:35.523935  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:35.580045  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:35.715612  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:35.820750  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:36.023965  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:36.024021  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:36.080801  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:36.215389  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:36.403637  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:36.403667  497052 retry.go:31] will retry after 9.094163433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
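	The apply failure above is kubectl's client-side validation rejecting ig-crd.yaml: at least one YAML document in that file does not set the required apiVersion and kind fields, so the command exits 1 even though every other gadget resource applies cleanly ("unchanged"/"configured"). The real fix is to repair the manifest, but as the error text itself suggests, validation can be skipped to unblock a manual retry; a minimal sketch reusing the exact paths from the log:

		sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
		  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
		  -f /etc/kubernetes/addons/ig-crd.yaml \
		  -f /etc/kubernetes/addons/ig-deployment.yaml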
	I1017 18:57:36.524205  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:36.524424  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:36.580173  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:36.715363  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:37.003762  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:37.023878  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:37.023966  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:37.080872  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:37.216214  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:37.523970  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:37.524241  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:37.580232  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:37.714997  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:38.023968  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:38.024177  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:38.079926  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:38.216049  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:38.524034  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:38.524105  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:38.579976  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:38.714937  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:39.023399  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:39.024311  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:39.080235  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:39.216131  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:39.504158  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:39.523990  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:39.524189  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:39.580047  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:39.715257  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:40.023393  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:40.023976  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:40.080084  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:40.215485  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:40.523925  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:40.524013  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:40.580323  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:40.715395  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:41.024104  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:41.024245  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:41.080178  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:41.215309  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:41.504619  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:41.524016  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:41.524317  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:41.580357  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:41.715073  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:42.023879  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:42.023903  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:42.080873  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:42.216511  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:42.524217  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:42.524313  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:42.580369  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:42.715435  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:43.023415  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:43.023776  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:43.080760  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:43.215974  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:43.523634  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:43.523962  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:43.579919  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:43.715519  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:44.004772  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:44.023853  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:44.024171  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:44.080259  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:44.215437  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:44.524032  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:44.524075  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:44.580173  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:44.714965  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:45.023916  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:45.023965  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:45.080809  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:45.215601  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:45.498918  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:45.523981  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:45.524351  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:45.580212  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:45.715009  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:46.023052  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:46.023708  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1017 18:57:46.062320  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:46.062355  497052 retry.go:31] will retry after 12.563757691s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:46.080166  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:46.215258  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:46.504513  497052 node_ready.go:57] node "addons-642189" has "Ready":"False" status (will retry)
	I1017 18:57:46.523648  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:46.524135  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:46.580068  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:46.714843  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:47.024148  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:47.024242  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:47.079586  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:47.215771  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:47.502971  497052 node_ready.go:49] node "addons-642189" is "Ready"
	I1017 18:57:47.503008  497052 node_ready.go:38] duration metric: took 40.002301943s for node "addons-642189" to be "Ready" ...
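	The 40s metric above is the wait for the node's Ready condition. A one-off equivalent of that wait, as a sketch assuming kubectl on the host is already pointed at this profile:

		kubectl wait --for=condition=Ready node/addons-642189 --timeout=120s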
	I1017 18:57:47.503027  497052 api_server.go:52] waiting for apiserver process to appear ...
	I1017 18:57:47.503088  497052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 18:57:47.521624  497052 api_server.go:72] duration metric: took 40.648204445s to wait for apiserver process to appear ...
	I1017 18:57:47.521653  497052 api_server.go:88] waiting for apiserver healthz status ...
	I1017 18:57:47.521676  497052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 18:57:47.523626  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:47.523907  497052 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 18:57:47.523928  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:47.528891  497052 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 18:57:47.529870  497052 api_server.go:141] control plane version: v1.34.1
	I1017 18:57:47.529904  497052 api_server.go:131] duration metric: took 8.243043ms to wait for apiserver health ...
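	The healthz probe logged above is a plain HTTPS GET and can be reproduced by hand; a sketch assuming the endpoint from the log and an apiserver certificate the host does not trust (hence -k):

		curl -k https://192.168.49.2:8443/healthz
		# a healthy apiserver returns HTTP 200 with body "ok", matching the log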
	I1017 18:57:47.529916  497052 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 18:57:47.534863  497052 system_pods.go:59] 20 kube-system pods found
	I1017 18:57:47.534901  497052 system_pods.go:61] "amd-gpu-device-plugin-t48xm" [3156d3f4-4196-443e-86ea-eb10fdc988bc] Pending
	I1017 18:57:47.534916  497052 system_pods.go:61] "coredns-66bc5c9577-9qzb6" [fac124c4-9636-4867-b8d6-b85ace3157be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:57:47.534923  497052 system_pods.go:61] "csi-hostpath-attacher-0" [ff1154c7-8dcf-4784-aeb0-4b7f71b610d8] Pending
	I1017 18:57:47.534935  497052 system_pods.go:61] "csi-hostpath-resizer-0" [5848a585-6545-4769-aef8-eece82ad7a3e] Pending
	I1017 18:57:47.534940  497052 system_pods.go:61] "csi-hostpathplugin-5kdtq" [51ff254c-6eca-4206-bc0d-d45c02ee3e01] Pending
	I1017 18:57:47.534946  497052 system_pods.go:61] "etcd-addons-642189" [19dd00f5-11cf-4bcb-8d15-81fdee0122ac] Running
	I1017 18:57:47.534961  497052 system_pods.go:61] "kindnet-6gk89" [fa4d48ce-32f6-4a29-a643-adf89425fb2d] Running
	I1017 18:57:47.534966  497052 system_pods.go:61] "kube-apiserver-addons-642189" [1416f756-9377-46ae-8c1e-89cad4fc1c3d] Running
	I1017 18:57:47.534978  497052 system_pods.go:61] "kube-controller-manager-addons-642189" [8db3ab0c-4f17-48cc-9e53-5522c8f070d5] Running
	I1017 18:57:47.534988  497052 system_pods.go:61] "kube-ingress-dns-minikube" [f8388279-4ec9-4e98-9cd9-b8d496b5d57a] Pending
	I1017 18:57:47.534992  497052 system_pods.go:61] "kube-proxy-n4pk6" [72dac253-09fc-4aa9-aed7-196eed4d49e7] Running
	I1017 18:57:47.535001  497052 system_pods.go:61] "kube-scheduler-addons-642189" [26a48cb9-6a80-4c21-b965-a2dec20ca37d] Running
	I1017 18:57:47.535009  497052 system_pods.go:61] "metrics-server-85b7d694d7-7d6xn" [3877854d-d5e2-4181-ba78-988a54712111] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:57:47.535020  497052 system_pods.go:61] "nvidia-device-plugin-daemonset-5272k" [f201ab4f-abad-46f2-a109-95004c7250f7] Pending
	I1017 18:57:47.535031  497052 system_pods.go:61] "registry-6b586f9694-gfg4q" [f3780320-4513-4f0c-a613-2e6dae9f1050] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:57:47.535038  497052 system_pods.go:61] "registry-creds-764b6fb674-wpqx2" [ff764293-9993-42e2-aed2-de34ffce5c63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:57:47.535044  497052 system_pods.go:61] "registry-proxy-7wchq" [ba24cd6f-ac09-4d7a-8504-fc72367cd2c3] Pending
	I1017 18:57:47.535053  497052 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qxcgb" [907c8bda-b107-4358-b274-36307a0e95d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:47.535059  497052 system_pods.go:61] "snapshot-controller-7d9fbc56b8-x4f9r" [8bad8697-4458-4007-beb2-6ee425032923] Pending
	I1017 18:57:47.535067  497052 system_pods.go:61] "storage-provisioner" [6b2b7583-da33-4e05-bf2a-75ac8e369265] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:57:47.535076  497052 system_pods.go:74] duration metric: took 5.152079ms to wait for pod list to return data ...
	I1017 18:57:47.535100  497052 default_sa.go:34] waiting for default service account to be created ...
	I1017 18:57:47.537220  497052 default_sa.go:45] found service account: "default"
	I1017 18:57:47.537244  497052 default_sa.go:55] duration metric: took 2.136658ms for default service account to be created ...
	I1017 18:57:47.537254  497052 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 18:57:47.542497  497052 system_pods.go:86] 20 kube-system pods found
	I1017 18:57:47.542527  497052 system_pods.go:89] "amd-gpu-device-plugin-t48xm" [3156d3f4-4196-443e-86ea-eb10fdc988bc] Pending
	I1017 18:57:47.542536  497052 system_pods.go:89] "coredns-66bc5c9577-9qzb6" [fac124c4-9636-4867-b8d6-b85ace3157be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:57:47.542541  497052 system_pods.go:89] "csi-hostpath-attacher-0" [ff1154c7-8dcf-4784-aeb0-4b7f71b610d8] Pending
	I1017 18:57:47.542547  497052 system_pods.go:89] "csi-hostpath-resizer-0" [5848a585-6545-4769-aef8-eece82ad7a3e] Pending
	I1017 18:57:47.542550  497052 system_pods.go:89] "csi-hostpathplugin-5kdtq" [51ff254c-6eca-4206-bc0d-d45c02ee3e01] Pending
	I1017 18:57:47.542553  497052 system_pods.go:89] "etcd-addons-642189" [19dd00f5-11cf-4bcb-8d15-81fdee0122ac] Running
	I1017 18:57:47.542556  497052 system_pods.go:89] "kindnet-6gk89" [fa4d48ce-32f6-4a29-a643-adf89425fb2d] Running
	I1017 18:57:47.542560  497052 system_pods.go:89] "kube-apiserver-addons-642189" [1416f756-9377-46ae-8c1e-89cad4fc1c3d] Running
	I1017 18:57:47.542565  497052 system_pods.go:89] "kube-controller-manager-addons-642189" [8db3ab0c-4f17-48cc-9e53-5522c8f070d5] Running
	I1017 18:57:47.542572  497052 system_pods.go:89] "kube-ingress-dns-minikube" [f8388279-4ec9-4e98-9cd9-b8d496b5d57a] Pending
	I1017 18:57:47.542578  497052 system_pods.go:89] "kube-proxy-n4pk6" [72dac253-09fc-4aa9-aed7-196eed4d49e7] Running
	I1017 18:57:47.542584  497052 system_pods.go:89] "kube-scheduler-addons-642189" [26a48cb9-6a80-4c21-b965-a2dec20ca37d] Running
	I1017 18:57:47.542596  497052 system_pods.go:89] "metrics-server-85b7d694d7-7d6xn" [3877854d-d5e2-4181-ba78-988a54712111] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:57:47.542604  497052 system_pods.go:89] "nvidia-device-plugin-daemonset-5272k" [f201ab4f-abad-46f2-a109-95004c7250f7] Pending
	I1017 18:57:47.542612  497052 system_pods.go:89] "registry-6b586f9694-gfg4q" [f3780320-4513-4f0c-a613-2e6dae9f1050] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:57:47.542621  497052 system_pods.go:89] "registry-creds-764b6fb674-wpqx2" [ff764293-9993-42e2-aed2-de34ffce5c63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:57:47.542625  497052 system_pods.go:89] "registry-proxy-7wchq" [ba24cd6f-ac09-4d7a-8504-fc72367cd2c3] Pending
	I1017 18:57:47.542635  497052 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qxcgb" [907c8bda-b107-4358-b274-36307a0e95d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:47.542646  497052 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x4f9r" [8bad8697-4458-4007-beb2-6ee425032923] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:47.542654  497052 system_pods.go:89] "storage-provisioner" [6b2b7583-da33-4e05-bf2a-75ac8e369265] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:57:47.542715  497052 retry.go:31] will retry after 269.224857ms: missing components: kube-dns
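	The retry above is driven solely by coredns ("kube-dns") still being Pending. One way to watch it converge, assuming the standard k8s-app=kube-dns label that kubeadm-provisioned clusters place on coredns pods:

		kubectl -n kube-system get pods -l k8s-app=kube-dns --watch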
	I1017 18:57:47.589834  497052 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 18:57:47.589863  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:47.715757  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:47.818157  497052 system_pods.go:86] 20 kube-system pods found
	I1017 18:57:47.818367  497052 system_pods.go:89] "amd-gpu-device-plugin-t48xm" [3156d3f4-4196-443e-86ea-eb10fdc988bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1017 18:57:47.818390  497052 system_pods.go:89] "coredns-66bc5c9577-9qzb6" [fac124c4-9636-4867-b8d6-b85ace3157be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:57:47.818404  497052 system_pods.go:89] "csi-hostpath-attacher-0" [ff1154c7-8dcf-4784-aeb0-4b7f71b610d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 18:57:47.818415  497052 system_pods.go:89] "csi-hostpath-resizer-0" [5848a585-6545-4769-aef8-eece82ad7a3e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 18:57:47.818424  497052 system_pods.go:89] "csi-hostpathplugin-5kdtq" [51ff254c-6eca-4206-bc0d-d45c02ee3e01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 18:57:47.818435  497052 system_pods.go:89] "etcd-addons-642189" [19dd00f5-11cf-4bcb-8d15-81fdee0122ac] Running
	I1017 18:57:47.818442  497052 system_pods.go:89] "kindnet-6gk89" [fa4d48ce-32f6-4a29-a643-adf89425fb2d] Running
	I1017 18:57:47.818448  497052 system_pods.go:89] "kube-apiserver-addons-642189" [1416f756-9377-46ae-8c1e-89cad4fc1c3d] Running
	I1017 18:57:47.818456  497052 system_pods.go:89] "kube-controller-manager-addons-642189" [8db3ab0c-4f17-48cc-9e53-5522c8f070d5] Running
	I1017 18:57:47.818465  497052 system_pods.go:89] "kube-ingress-dns-minikube" [f8388279-4ec9-4e98-9cd9-b8d496b5d57a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 18:57:47.818473  497052 system_pods.go:89] "kube-proxy-n4pk6" [72dac253-09fc-4aa9-aed7-196eed4d49e7] Running
	I1017 18:57:47.818480  497052 system_pods.go:89] "kube-scheduler-addons-642189" [26a48cb9-6a80-4c21-b965-a2dec20ca37d] Running
	I1017 18:57:47.818498  497052 system_pods.go:89] "metrics-server-85b7d694d7-7d6xn" [3877854d-d5e2-4181-ba78-988a54712111] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:57:47.818507  497052 system_pods.go:89] "nvidia-device-plugin-daemonset-5272k" [f201ab4f-abad-46f2-a109-95004c7250f7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 18:57:47.818516  497052 system_pods.go:89] "registry-6b586f9694-gfg4q" [f3780320-4513-4f0c-a613-2e6dae9f1050] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:57:47.818528  497052 system_pods.go:89] "registry-creds-764b6fb674-wpqx2" [ff764293-9993-42e2-aed2-de34ffce5c63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:57:47.818552  497052 system_pods.go:89] "registry-proxy-7wchq" [ba24cd6f-ac09-4d7a-8504-fc72367cd2c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 18:57:47.818564  497052 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qxcgb" [907c8bda-b107-4358-b274-36307a0e95d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:47.818575  497052 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x4f9r" [8bad8697-4458-4007-beb2-6ee425032923] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:47.818584  497052 system_pods.go:89] "storage-provisioner" [6b2b7583-da33-4e05-bf2a-75ac8e369265] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:57:47.818609  497052 retry.go:31] will retry after 250.033006ms: missing components: kube-dns
	I1017 18:57:48.023864  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:48.023905  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:48.073068  497052 system_pods.go:86] 20 kube-system pods found
	I1017 18:57:48.073106  497052 system_pods.go:89] "amd-gpu-device-plugin-t48xm" [3156d3f4-4196-443e-86ea-eb10fdc988bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1017 18:57:48.073114  497052 system_pods.go:89] "coredns-66bc5c9577-9qzb6" [fac124c4-9636-4867-b8d6-b85ace3157be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:57:48.073122  497052 system_pods.go:89] "csi-hostpath-attacher-0" [ff1154c7-8dcf-4784-aeb0-4b7f71b610d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 18:57:48.073128  497052 system_pods.go:89] "csi-hostpath-resizer-0" [5848a585-6545-4769-aef8-eece82ad7a3e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 18:57:48.073135  497052 system_pods.go:89] "csi-hostpathplugin-5kdtq" [51ff254c-6eca-4206-bc0d-d45c02ee3e01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 18:57:48.073139  497052 system_pods.go:89] "etcd-addons-642189" [19dd00f5-11cf-4bcb-8d15-81fdee0122ac] Running
	I1017 18:57:48.073143  497052 system_pods.go:89] "kindnet-6gk89" [fa4d48ce-32f6-4a29-a643-adf89425fb2d] Running
	I1017 18:57:48.073147  497052 system_pods.go:89] "kube-apiserver-addons-642189" [1416f756-9377-46ae-8c1e-89cad4fc1c3d] Running
	I1017 18:57:48.073150  497052 system_pods.go:89] "kube-controller-manager-addons-642189" [8db3ab0c-4f17-48cc-9e53-5522c8f070d5] Running
	I1017 18:57:48.073155  497052 system_pods.go:89] "kube-ingress-dns-minikube" [f8388279-4ec9-4e98-9cd9-b8d496b5d57a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 18:57:48.073158  497052 system_pods.go:89] "kube-proxy-n4pk6" [72dac253-09fc-4aa9-aed7-196eed4d49e7] Running
	I1017 18:57:48.073162  497052 system_pods.go:89] "kube-scheduler-addons-642189" [26a48cb9-6a80-4c21-b965-a2dec20ca37d] Running
	I1017 18:57:48.073167  497052 system_pods.go:89] "metrics-server-85b7d694d7-7d6xn" [3877854d-d5e2-4181-ba78-988a54712111] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:57:48.073176  497052 system_pods.go:89] "nvidia-device-plugin-daemonset-5272k" [f201ab4f-abad-46f2-a109-95004c7250f7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 18:57:48.073181  497052 system_pods.go:89] "registry-6b586f9694-gfg4q" [f3780320-4513-4f0c-a613-2e6dae9f1050] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:57:48.073190  497052 system_pods.go:89] "registry-creds-764b6fb674-wpqx2" [ff764293-9993-42e2-aed2-de34ffce5c63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:57:48.073196  497052 system_pods.go:89] "registry-proxy-7wchq" [ba24cd6f-ac09-4d7a-8504-fc72367cd2c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 18:57:48.073201  497052 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qxcgb" [907c8bda-b107-4358-b274-36307a0e95d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:48.073208  497052 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x4f9r" [8bad8697-4458-4007-beb2-6ee425032923] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:48.073213  497052 system_pods.go:89] "storage-provisioner" [6b2b7583-da33-4e05-bf2a-75ac8e369265] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:57:48.073230  497052 retry.go:31] will retry after 463.707569ms: missing components: kube-dns
	I1017 18:57:48.080096  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:48.215550  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:48.524793  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:48.525000  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:48.544223  497052 system_pods.go:86] 20 kube-system pods found
	I1017 18:57:48.544273  497052 system_pods.go:89] "amd-gpu-device-plugin-t48xm" [3156d3f4-4196-443e-86ea-eb10fdc988bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1017 18:57:48.544283  497052 system_pods.go:89] "coredns-66bc5c9577-9qzb6" [fac124c4-9636-4867-b8d6-b85ace3157be] Running
	I1017 18:57:48.544304  497052 system_pods.go:89] "csi-hostpath-attacher-0" [ff1154c7-8dcf-4784-aeb0-4b7f71b610d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 18:57:48.544320  497052 system_pods.go:89] "csi-hostpath-resizer-0" [5848a585-6545-4769-aef8-eece82ad7a3e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 18:57:48.544338  497052 system_pods.go:89] "csi-hostpathplugin-5kdtq" [51ff254c-6eca-4206-bc0d-d45c02ee3e01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 18:57:48.544345  497052 system_pods.go:89] "etcd-addons-642189" [19dd00f5-11cf-4bcb-8d15-81fdee0122ac] Running
	I1017 18:57:48.544356  497052 system_pods.go:89] "kindnet-6gk89" [fa4d48ce-32f6-4a29-a643-adf89425fb2d] Running
	I1017 18:57:48.544363  497052 system_pods.go:89] "kube-apiserver-addons-642189" [1416f756-9377-46ae-8c1e-89cad4fc1c3d] Running
	I1017 18:57:48.544369  497052 system_pods.go:89] "kube-controller-manager-addons-642189" [8db3ab0c-4f17-48cc-9e53-5522c8f070d5] Running
	I1017 18:57:48.544382  497052 system_pods.go:89] "kube-ingress-dns-minikube" [f8388279-4ec9-4e98-9cd9-b8d496b5d57a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 18:57:48.544388  497052 system_pods.go:89] "kube-proxy-n4pk6" [72dac253-09fc-4aa9-aed7-196eed4d49e7] Running
	I1017 18:57:48.544395  497052 system_pods.go:89] "kube-scheduler-addons-642189" [26a48cb9-6a80-4c21-b965-a2dec20ca37d] Running
	I1017 18:57:48.544403  497052 system_pods.go:89] "metrics-server-85b7d694d7-7d6xn" [3877854d-d5e2-4181-ba78-988a54712111] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:57:48.544417  497052 system_pods.go:89] "nvidia-device-plugin-daemonset-5272k" [f201ab4f-abad-46f2-a109-95004c7250f7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 18:57:48.544427  497052 system_pods.go:89] "registry-6b586f9694-gfg4q" [f3780320-4513-4f0c-a613-2e6dae9f1050] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:57:48.544441  497052 system_pods.go:89] "registry-creds-764b6fb674-wpqx2" [ff764293-9993-42e2-aed2-de34ffce5c63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:57:48.544456  497052 system_pods.go:89] "registry-proxy-7wchq" [ba24cd6f-ac09-4d7a-8504-fc72367cd2c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 18:57:48.544477  497052 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qxcgb" [907c8bda-b107-4358-b274-36307a0e95d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:48.544490  497052 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x4f9r" [8bad8697-4458-4007-beb2-6ee425032923] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:48.544497  497052 system_pods.go:89] "storage-provisioner" [6b2b7583-da33-4e05-bf2a-75ac8e369265] Running
	I1017 18:57:48.544510  497052 system_pods.go:126] duration metric: took 1.007247909s to wait for k8s-apps to be running ...
	I1017 18:57:48.544525  497052 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 18:57:48.544594  497052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 18:57:48.560799  497052 system_svc.go:56] duration metric: took 16.260831ms WaitForService to wait for kubelet
	I1017 18:57:48.560848  497052 kubeadm.go:586] duration metric: took 41.687432721s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 18:57:48.560887  497052 node_conditions.go:102] verifying NodePressure condition ...
	I1017 18:57:48.564184  497052 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 18:57:48.564213  497052 node_conditions.go:123] node cpu capacity is 8
	I1017 18:57:48.564228  497052 node_conditions.go:105] duration metric: took 3.337392ms to run NodePressure ...
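	The capacity figures above are read from the node's status. A hypothetical way to pull the same values directly, again assuming kubectl targets this cluster:

		kubectl get node addons-642189 -o jsonpath='{.status.capacity}'
		# per the log above this should include "cpu":"8" and "ephemeral-storage":"304681132Ki"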
	I1017 18:57:48.564242  497052 start.go:241] waiting for startup goroutines ...
	I1017 18:57:48.580734  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:48.715507  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:49.024318  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:49.024668  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:49.081142  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:49.216159  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:49.524364  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:49.524391  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:49.580811  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:49.715632  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:50.024612  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:50.024620  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:50.081464  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:50.215831  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:50.524215  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:50.524445  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:50.581147  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:50.715396  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:51.024071  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:51.024313  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:51.081196  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:51.215324  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:51.524319  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:51.524452  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:51.580623  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:51.714944  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:52.024668  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:52.024807  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:52.080957  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:52.215939  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:52.524742  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:52.524864  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:52.625023  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:52.715548  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:53.023855  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:53.024642  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:53.081152  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:53.216236  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:53.524260  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:53.524565  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:53.581097  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:53.715049  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:54.025141  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:54.025408  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:54.080570  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:54.215377  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:54.523636  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:54.524141  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:54.580317  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:54.715090  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:55.024006  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:55.024162  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:55.080774  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:55.216564  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:55.525058  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:55.525089  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:55.580313  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:55.714648  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:56.086941  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:56.087382  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:56.087559  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:56.215588  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:56.524428  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:56.524453  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:56.580309  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:56.714854  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:57.024303  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:57.024350  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:57.081317  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:57.215502  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:57.524142  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:57.524233  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:57.580869  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:57.715740  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:58.024382  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:58.024539  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:58.080946  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:58.216425  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:58.524609  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:58.524630  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:58.624973  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:58.626990  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:58.725716  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:59.023662  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:59.024322  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:59.080795  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:59.199489  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:59.199521  497052 retry.go:31] will retry after 25.11020404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
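The failure above is kubectl's client-side validation: every manifest document must carry `apiVersion` and `kind`, and the `ig-crd.yaml` being applied evidently has neither. A minimal sketch of the same check (a hypothetical helper, not kubectl or minikube source; it reads only the first YAML document):

```go
// validate_typemeta.go: sketch of the client-side check that produces
// "[apiVersion not set, kind not set]" in the log above.
package main

import (
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var tm metav1.TypeMeta // just apiVersion and kind
	if err := yaml.Unmarshal(data, &tm); err != nil {
		fmt.Fprintln(os.Stderr, "error parsing YAML:", err)
		os.Exit(1)
	}
	var errs []string
	if tm.APIVersion == "" {
		errs = append(errs, "apiVersion not set")
	}
	if tm.Kind == "" {
		errs = append(errs, "kind not set")
	}
	if len(errs) > 0 {
		fmt.Printf("error validating data: %v\n", errs)
		os.Exit(1)
	}
	fmt.Println("manifest carries apiVersion and kind")
}
```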
	I1017 18:57:59.215207  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:59.526666  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:59.526867  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:59.627142  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:59.715564  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:00.024935  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:00.025115  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:00.081298  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:00.216454  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:00.524303  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:00.524339  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:00.581263  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:00.715417  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:01.024280  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:01.024599  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:01.081198  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:01.215639  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:01.524068  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:01.524430  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:01.581138  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:01.714646  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:02.024425  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:02.024537  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:02.081428  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:02.216144  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:02.524029  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:02.524050  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:02.580648  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:02.715608  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:03.025042  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:03.025065  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:03.081490  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:03.215752  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:03.524103  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:03.524318  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:03.581819  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:03.715616  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:04.024871  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:04.024924  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:04.081122  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:04.216889  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:04.524591  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:04.524604  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:04.581069  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:04.715472  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:05.024597  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:05.024637  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:05.081066  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:05.215075  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:05.523902  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:05.523934  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:05.581295  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:05.716290  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:06.024179  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:06.024467  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:06.080523  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:06.215971  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:06.524356  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:06.524588  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:06.581481  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:06.714843  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:07.024176  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:07.024231  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:07.080960  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:07.216581  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:07.524002  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:07.524170  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:07.580602  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:07.715066  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:08.023806  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:08.024461  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:08.080731  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:08.216258  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:08.636458  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:08.636596  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:08.636658  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:08.738711  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:09.023868  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:09.023883  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:09.080963  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:09.216107  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:09.525073  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:09.525131  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:09.581666  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:09.716459  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:10.027416  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:10.028729  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:10.085163  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:10.215722  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:10.525679  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:10.525743  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:10.582132  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:10.715056  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:11.024390  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:11.024397  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:11.081037  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:11.215790  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:11.524787  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:11.527505  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:11.581459  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:11.715345  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:12.023968  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:12.024024  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:12.081593  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:12.216159  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:12.524501  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:12.524859  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:12.581381  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:12.715192  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:13.024163  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:13.024354  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:13.080943  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:13.216277  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:13.524056  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:13.524267  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:13.580654  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:13.715865  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:14.024630  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:14.024677  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:14.081326  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:14.216132  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:14.524283  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:14.524447  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:14.581268  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:14.714888  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:15.024379  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:15.024435  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:15.080975  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:15.215395  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:15.535661  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:15.535709  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:15.602959  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:15.732588  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:16.024147  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:16.024274  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:16.080876  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:16.215890  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:16.523508  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:16.523575  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:16.580950  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:16.715473  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:17.024150  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:17.024384  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:17.081049  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:17.216306  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:17.523481  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:17.524015  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:17.580010  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:17.714626  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:18.024187  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:18.024242  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:18.080859  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:18.216442  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:18.523705  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:18.523802  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:18.581396  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:18.714969  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:19.024763  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:19.024774  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:19.081092  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:19.215232  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:19.524666  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:19.525441  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:19.625362  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:19.715298  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:20.025401  497052 kapi.go:107] duration metric: took 1m11.504323467s to wait for kubernetes.io/minikube-addons=registry ...
	I1017 18:58:20.025480  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:20.081187  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:20.216271  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:20.523714  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:20.628248  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:20.717128  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:21.025181  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:21.080494  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:21.215705  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:21.524658  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:21.580861  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:21.715841  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:22.024816  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:22.082581  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:22.216137  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:22.523798  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:22.580346  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:22.714773  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:23.024331  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:23.080872  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:23.216662  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:23.525201  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:23.580758  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:23.715856  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:24.029054  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:24.082235  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:24.217536  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:24.311024  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:58:24.525578  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:24.588801  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:24.731261  497052 kapi.go:107] duration metric: took 1m9.519331114s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1017 18:58:24.798993  497052 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-642189 cluster.
	I1017 18:58:24.822247  497052 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1017 18:58:24.844564  497052 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
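The opt-out the message describes is a pod label with the `gcp-auth-skip-secret` key. A minimal sketch of a pod carrying that label, built with client-go types; the value `"true"` is an assumption, since the log only names the key:

```go
// skip_secret_pod.go: a pod that opts out of gcp-auth credential mounting
// via the gcp-auth-skip-secret label named in the log above. Sketch only.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds",
			// Presence of this key is what the webhook checks for; the
			// value "true" is an assumption.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
```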
	I1017 18:58:25.025630  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:25.082643  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:25.157473  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:25.157512  497052 retry.go:31] will retry after 41.701288149s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
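The `retry.go:31` lines schedule another attempt after a randomized, growing delay (25.1s on the first retry above, 41.7s here). A sketch of that pattern with a hypothetical helper, not minikube's actual retry package:

```go
// retry_sketch.go: retries a failing step with jittered, growing delays,
// in the spirit of the retry.go lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, step func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = step(); err == nil {
			return nil
		}
		// Double the base each round and add up to 50% random jitter,
		// which is how uneven delays like 25.11s then 41.70s can arise.
		delay := base * time.Duration(1<<i)
		delay += time.Duration(rand.Int63n(int64(delay / 2)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := retry(3, 2*time.Second, func() error {
		return errors.New("kubectl apply: exit status 1")
	})
	fmt.Println("gave up:", err)
}
```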
	I1017 18:58:25.524242  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:25.581005  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:26.024837  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:26.081256  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:26.523996  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:26.581143  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:27.024258  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:27.081001  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:27.523880  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:27.581757  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:28.023725  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:28.081352  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:28.524577  497052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:28.596367  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:29.024420  497052 kapi.go:107] duration metric: took 1m20.50428445s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1017 18:58:29.080662  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:29.581229  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:30.081310  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:30.581187  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:31.081450  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:31.581312  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:32.080918  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:32.581210  497052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:33.080838  497052 kapi.go:107] duration metric: took 1m24.004007789s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
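All of the `kapi.go:96` waits that just completed are label-selector polls: list the pods matching a selector roughly every 500ms and report the phase until no pod is still Pending. A client-go sketch of that loop (not minikube's kapi implementation; kubeconfig path and namespace are assumptions):

```go
// wait_for_label.go: polls pods matching a label selector until none are
// Pending, mirroring the kapi.go wait loops above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	selector := "kubernetes.io/minikube-addons=csi-hostpath-driver"
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			pending := false
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodPending {
					pending = true
				}
			}
			if !pending {
				fmt.Println("all pods past Pending for", selector)
				return
			}
		}
		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
		time.Sleep(500 * time.Millisecond)
	}
}
```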
	I1017 18:59:06.862175  497052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1017 18:59:07.424178  497052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 18:59:07.424316  497052 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1017 18:59:07.426510  497052 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, cloud-spanner, ingress-dns, default-storageclass, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1017 18:59:07.427821  497052 addons.go:514] duration metric: took 2m0.554363307s for enable addons: enabled=[registry-creds amd-gpu-device-plugin cloud-spanner ingress-dns default-storageclass storage-provisioner nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1017 18:59:07.427878  497052 start.go:246] waiting for cluster config update ...
	I1017 18:59:07.427905  497052 start.go:255] writing updated cluster config ...
	I1017 18:59:07.428260  497052 ssh_runner.go:195] Run: rm -f paused
	I1017 18:59:07.432549  497052 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 18:59:07.436954  497052 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9qzb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:07.441754  497052 pod_ready.go:94] pod "coredns-66bc5c9577-9qzb6" is "Ready"
	I1017 18:59:07.441785  497052 pod_ready.go:86] duration metric: took 4.804584ms for pod "coredns-66bc5c9577-9qzb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:07.443790  497052 pod_ready.go:83] waiting for pod "etcd-addons-642189" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:07.447807  497052 pod_ready.go:94] pod "etcd-addons-642189" is "Ready"
	I1017 18:59:07.447829  497052 pod_ready.go:86] duration metric: took 4.018226ms for pod "etcd-addons-642189" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:07.449735  497052 pod_ready.go:83] waiting for pod "kube-apiserver-addons-642189" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:07.453604  497052 pod_ready.go:94] pod "kube-apiserver-addons-642189" is "Ready"
	I1017 18:59:07.453626  497052 pod_ready.go:86] duration metric: took 3.871056ms for pod "kube-apiserver-addons-642189" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:07.455589  497052 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-642189" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:07.836538  497052 pod_ready.go:94] pod "kube-controller-manager-addons-642189" is "Ready"
	I1017 18:59:07.836568  497052 pod_ready.go:86] duration metric: took 380.960631ms for pod "kube-controller-manager-addons-642189" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:08.036829  497052 pod_ready.go:83] waiting for pod "kube-proxy-n4pk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:08.437708  497052 pod_ready.go:94] pod "kube-proxy-n4pk6" is "Ready"
	I1017 18:59:08.437739  497052 pod_ready.go:86] duration metric: took 400.882008ms for pod "kube-proxy-n4pk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:08.637645  497052 pod_ready.go:83] waiting for pod "kube-scheduler-addons-642189" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:09.037213  497052 pod_ready.go:94] pod "kube-scheduler-addons-642189" is "Ready"
	I1017 18:59:09.037242  497052 pod_ready.go:86] duration metric: took 399.569767ms for pod "kube-scheduler-addons-642189" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:09.037254  497052 pod_ready.go:40] duration metric: took 1.604669397s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
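The "Ready" test behind the `pod_ready.go` lines is the `PodReady` condition on the pod's status, distinct from the phase check used during the addon waits. A client-go sketch of that check (not minikube's pod_ready implementation; the pod name is taken from the log above):

```go
// pod_ready_sketch.go: reports whether a pod's PodReady condition is True,
// the same notion of "Ready" the pod_ready.go lines above wait on.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
		"coredns-66bc5c9577-9qzb6", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q is Ready: %v\n", pod.Name, isReady(pod))
}
```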
	I1017 18:59:09.085722  497052 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 18:59:09.087586  497052 out.go:179] * Done! kubectl is now configured to use "addons-642189" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 17 18:59:01 addons-642189 crio[766]: time="2025-10-17T18:59:01.535197304Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 18:59:01 addons-642189 crio[766]: time="2025-10-17T18:59:01.535248995Z" level=info msg="Removed pod sandbox: f916396ee7e2b6ff5870d9d7c9bf2823cfdec337abc2e692fee531c6b540cbe5" id=852e928f-3bae-4179-89f1-081f375e4ca6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 18:59:09 addons-642189 crio[766]: time="2025-10-17T18:59:09.908472206Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7aae1042-5de6-4d9c-8da4-62c6901686f2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 18:59:09 addons-642189 crio[766]: time="2025-10-17T18:59:09.908583026Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 18:59:09 addons-642189 crio[766]: time="2025-10-17T18:59:09.914542965Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0f6149e8c6f6ac05b41d0672fc146290c3436fdd19ea9067f149a3f96b32f840 UID:11e3a0a3-e413-4307-8a33-7461887a2188 NetNS:/var/run/netns/96b93c7f-0383-4b2b-884b-7c1ac3c2c175 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00034e5f0}] Aliases:map[]}"
	Oct 17 18:59:09 addons-642189 crio[766]: time="2025-10-17T18:59:09.914577102Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 18:59:09 addons-642189 crio[766]: time="2025-10-17T18:59:09.925013408Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0f6149e8c6f6ac05b41d0672fc146290c3436fdd19ea9067f149a3f96b32f840 UID:11e3a0a3-e413-4307-8a33-7461887a2188 NetNS:/var/run/netns/96b93c7f-0383-4b2b-884b-7c1ac3c2c175 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00034e5f0}] Aliases:map[]}"
	Oct 17 18:59:09 addons-642189 crio[766]: time="2025-10-17T18:59:09.925154227Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 18:59:09 addons-642189 crio[766]: time="2025-10-17T18:59:09.926245883Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 18:59:09 addons-642189 crio[766]: time="2025-10-17T18:59:09.927443943Z" level=info msg="Ran pod sandbox 0f6149e8c6f6ac05b41d0672fc146290c3436fdd19ea9067f149a3f96b32f840 with infra container: default/busybox/POD" id=7aae1042-5de6-4d9c-8da4-62c6901686f2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 18:59:09 addons-642189 crio[766]: time="2025-10-17T18:59:09.928859048Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dc8c7d44-4814-4d84-81c7-92c1b8ec8278 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 18:59:09 addons-642189 crio[766]: time="2025-10-17T18:59:09.928976612Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=dc8c7d44-4814-4d84-81c7-92c1b8ec8278 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 18:59:09 addons-642189 crio[766]: time="2025-10-17T18:59:09.929009902Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=dc8c7d44-4814-4d84-81c7-92c1b8ec8278 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 18:59:09 addons-642189 crio[766]: time="2025-10-17T18:59:09.929747206Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7c868a43-609e-40e9-b112-b8eabb7f5902 name=/runtime.v1.ImageService/PullImage
	Oct 17 18:59:09 addons-642189 crio[766]: time="2025-10-17T18:59:09.931417173Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 18:59:10 addons-642189 crio[766]: time="2025-10-17T18:59:10.6489889Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=7c868a43-609e-40e9-b112-b8eabb7f5902 name=/runtime.v1.ImageService/PullImage
	Oct 17 18:59:10 addons-642189 crio[766]: time="2025-10-17T18:59:10.649733637Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ac468247-b261-489c-b292-2554e5e92fcb name=/runtime.v1.ImageService/ImageStatus
	Oct 17 18:59:10 addons-642189 crio[766]: time="2025-10-17T18:59:10.651207443Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b32770dd-1d55-4b13-bdba-bdce51f0fa2b name=/runtime.v1.ImageService/ImageStatus
	Oct 17 18:59:10 addons-642189 crio[766]: time="2025-10-17T18:59:10.654779622Z" level=info msg="Creating container: default/busybox/busybox" id=9a7c10aa-cd20-4dc4-907e-8a5e6eb3a9cc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 18:59:10 addons-642189 crio[766]: time="2025-10-17T18:59:10.655466558Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 18:59:10 addons-642189 crio[766]: time="2025-10-17T18:59:10.663833872Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 18:59:10 addons-642189 crio[766]: time="2025-10-17T18:59:10.664619884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 18:59:10 addons-642189 crio[766]: time="2025-10-17T18:59:10.719534659Z" level=info msg="Created container 05c40821c0ea18b27c74860879d249ab0074fa151a9157cec9e791577d5a6cdc: default/busybox/busybox" id=9a7c10aa-cd20-4dc4-907e-8a5e6eb3a9cc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 18:59:10 addons-642189 crio[766]: time="2025-10-17T18:59:10.72063236Z" level=info msg="Starting container: 05c40821c0ea18b27c74860879d249ab0074fa151a9157cec9e791577d5a6cdc" id=63e28ecd-f0a6-4c2c-a71f-9136305abfd1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 18:59:10 addons-642189 crio[766]: time="2025-10-17T18:59:10.723147661Z" level=info msg="Started container" PID=6648 containerID=05c40821c0ea18b27c74860879d249ab0074fa151a9157cec9e791577d5a6cdc description=default/busybox/busybox id=63e28ecd-f0a6-4c2c-a71f-9136305abfd1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0f6149e8c6f6ac05b41d0672fc146290c3436fdd19ea9067f149a3f96b32f840
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	05c40821c0ea1       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   0f6149e8c6f6a       busybox                                     default
	621b748d53884       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          47 seconds ago       Running             csi-snapshotter                          0                   af13f6543313d       csi-hostpathplugin-5kdtq                    kube-system
	317712e1d5627       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          49 seconds ago       Running             csi-provisioner                          0                   af13f6543313d       csi-hostpathplugin-5kdtq                    kube-system
	c8951bd4e7631       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            50 seconds ago       Running             liveness-probe                           0                   af13f6543313d       csi-hostpathplugin-5kdtq                    kube-system
	6073132bac88b       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           50 seconds ago       Running             hostpath                                 0                   af13f6543313d       csi-hostpathplugin-5kdtq                    kube-system
	655687219dc3a       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             51 seconds ago       Running             controller                               0                   e2ab0f3f62d10       ingress-nginx-controller-675c5ddd98-m2d8d   ingress-nginx
	b4ac0698e398e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 55 seconds ago       Running             gcp-auth                                 0                   770e528328818       gcp-auth-78565c9fb4-qz4xs                   gcp-auth
	d3882b8636526       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                57 seconds ago       Running             node-driver-registrar                    0                   af13f6543313d       csi-hostpathplugin-5kdtq                    kube-system
	c9c4e61a00241       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            57 seconds ago       Running             gadget                                   0                   c0d1f662108bb       gadget-862fn                                gadget
	99fe19979e6f7       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              About a minute ago   Running             registry-proxy                           0                   809255c8d95fe       registry-proxy-7wchq                        kube-system
	600ce5e0b6a85       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   About a minute ago   Running             csi-external-health-monitor-controller   0                   af13f6543313d       csi-hostpathplugin-5kdtq                    kube-system
	da14c2626c054       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             About a minute ago   Exited              patch                                    2                   d995aaab5f207       ingress-nginx-admission-patch-bm6p2         ingress-nginx
	214596c066d6e       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   ac43d3b720928       nvidia-device-plugin-daemonset-5272k        kube-system
	cd49fb8b1ee5c       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   312361186cc37       csi-hostpath-resizer-0                      kube-system
	b3f4b36a5cb43       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   3e25684aa0e80       snapshot-controller-7d9fbc56b8-x4f9r        kube-system
	fc47a341f594c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     About a minute ago   Running             amd-gpu-device-plugin                    0                   c3b6c1f87ffa0       amd-gpu-device-plugin-t48xm                 kube-system
	dc706332dbb69       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              create                                   0                   e75136860bf4b       ingress-nginx-admission-create-xlhk6        ingress-nginx
	d3140eef7e893       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   7da1353dd02e9       snapshot-controller-7d9fbc56b8-qxcgb        kube-system
	bce8d27694469       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   92e6db4838030       csi-hostpath-attacher-0                     kube-system
	26a77f9d8fd20       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   6ac10e16c389d       kube-ingress-dns-minikube                   kube-system
	afa9f6b049681       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   b79875ef65108       registry-6b586f9694-gfg4q                   kube-system
	33c6465e1a0d9       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   b65a84b02008a       yakd-dashboard-5ff678cb9-76bx8              yakd-dashboard
	b6fecbd31e3b0       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   5d92653513a48       local-path-provisioner-648f6765c9-7cp9v     local-path-storage
	ea8c7aa6a69f9       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   a38ba07469845       metrics-server-85b7d694d7-7d6xn             kube-system
	7f62e9677624b       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               About a minute ago   Running             cloud-spanner-emulator                   0                   3f67b44522228       cloud-spanner-emulator-86bd5cbb97-fbjhl     default
	05b0d75fa7e33       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   d3ca4b2a3eaa6       storage-provisioner                         kube-system
	c8959e94a4c12       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   0067f27233069       coredns-66bc5c9577-9qzb6                    kube-system
	d6a7317aabf4d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             2 minutes ago        Running             kindnet-cni                              0                   7290efc14442b       kindnet-6gk89                               kube-system
	49aea2d7818a2       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             2 minutes ago        Running             kube-proxy                               0                   94d580da7a351       kube-proxy-n4pk6                            kube-system
	43e40655463cf       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   55fed3d15ddf2       kube-scheduler-addons-642189                kube-system
	8b60fdbdcbbd6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   79c4f5c85b94c       etcd-addons-642189                          kube-system
	44a3d62e9e439       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   2cb9ab7dc6af7       kube-controller-manager-addons-642189       kube-system
	a76bbc48e30da       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   b1c6a14f84229       kube-apiserver-addons-642189                kube-system
	
	
	==> coredns [c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854] <==
	[INFO] 10.244.0.13:33933 - 42923 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003635105s
	[INFO] 10.244.0.13:39492 - 31195 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000102379s
	[INFO] 10.244.0.13:39492 - 30744 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000149885s
	[INFO] 10.244.0.13:54436 - 63982 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000070961s
	[INFO] 10.244.0.13:54436 - 63495 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000067933s
	[INFO] 10.244.0.13:53375 - 45093 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000070859s
	[INFO] 10.244.0.13:53375 - 44873 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000112258s
	[INFO] 10.244.0.13:38506 - 55001 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000113822s
	[INFO] 10.244.0.13:38506 - 55227 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000135002s
	[INFO] 10.244.0.21:60599 - 866 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000221916s
	[INFO] 10.244.0.21:44128 - 45980 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000297712s
	[INFO] 10.244.0.21:57031 - 887 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000153908s
	[INFO] 10.244.0.21:37989 - 26730 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000221509s
	[INFO] 10.244.0.21:44702 - 8198 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00015611s
	[INFO] 10.244.0.21:41185 - 11984 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000193283s
	[INFO] 10.244.0.21:45362 - 39194 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003215759s
	[INFO] 10.244.0.21:40941 - 22270 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003283267s
	[INFO] 10.244.0.21:60902 - 25921 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004885537s
	[INFO] 10.244.0.21:40884 - 8865 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.007036755s
	[INFO] 10.244.0.21:55124 - 16399 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004380194s
	[INFO] 10.244.0.21:46667 - 54522 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004788681s
	[INFO] 10.244.0.21:41007 - 44985 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00427368s
	[INFO] 10.244.0.21:55813 - 25189 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006085568s
	[INFO] 10.244.0.21:59010 - 58996 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001063275s
	[INFO] 10.244.0.21:34140 - 24710 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002341588s
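	
	The NXDOMAIN run above is ordinary search-path expansion, not a resolution failure: registry.kube-system.svc.cluster.local contains four dots, below the Kubernetes default ndots:5, so the resolver tries every resolv.conf search suffix (each answered NXDOMAIN) before the verbatim name finally returns NOERROR. A minimal Go sketch of that expansion logic, with the suffix list read off the log lines above and ndots:5 assumed as the usual default rather than observed in this report:
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// expand mimics resolv.conf search-list handling: a name with fewer
	// dots than ndots is tried against every search suffix first (each
	// attempt NXDOMAIN in the log above) before being queried verbatim.
	func expand(name string, search []string, ndots int) []string {
		var tries []string
		if strings.Count(name, ".") < ndots {
			for _, suffix := range search {
				tries = append(tries, name+"."+suffix)
			}
		}
		return append(tries, name)
	}
	
	func main() {
		// Suffixes in the order the CoreDNS log shows them being tried.
		search := []string{
			"local",
			"us-central1-a.c.k8s-minikube.internal",
			"c.k8s-minikube.internal",
			"google.internal",
		}
		for _, q := range expand("registry.kube-system.svc.cluster.local", search, 5) {
			fmt.Println(q)
		}
	}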
	
	
	==> describe nodes <==
	Name:               addons-642189
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-642189
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=addons-642189
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T18_57_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-642189
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-642189"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 18:56:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-642189
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 18:59:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 18:58:42 +0000   Fri, 17 Oct 2025 18:56:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 18:58:42 +0000   Fri, 17 Oct 2025 18:56:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 18:58:42 +0000   Fri, 17 Oct 2025 18:56:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 18:58:42 +0000   Fri, 17 Oct 2025 18:57:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-642189
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                bdcb748b-3e8d-4cb8-92a6-69cb543c2625
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-86bd5cbb97-fbjhl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  gadget                      gadget-862fn                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  gcp-auth                    gcp-auth-78565c9fb4-qz4xs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-m2d8d    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         2m11s
	  kube-system                 amd-gpu-device-plugin-t48xm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-9qzb6                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m13s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 csi-hostpathplugin-5kdtq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 etcd-addons-642189                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m18s
	  kube-system                 kindnet-6gk89                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m13s
	  kube-system                 kube-apiserver-addons-642189                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-controller-manager-addons-642189        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-proxy-n4pk6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-scheduler-addons-642189                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 metrics-server-85b7d694d7-7d6xn              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         2m11s
	  kube-system                 nvidia-device-plugin-daemonset-5272k         0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 registry-6b586f9694-gfg4q                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 registry-creds-764b6fb674-wpqx2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 registry-proxy-7wchq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 snapshot-controller-7d9fbc56b8-qxcgb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 snapshot-controller-7d9fbc56b8-x4f9r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  local-path-storage          local-path-provisioner-648f6765c9-7cp9v      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-76bx8               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     2m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
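	(For reference, the CPU percentages are consistent with the Allocatable block above: 1050m requested of 8000m allocatable is 13.125%, shown as 13%, and the 100m limit is about 1%.)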
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m12s  kube-proxy       
	  Normal  Starting                 2m18s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m18s  kubelet          Node addons-642189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m18s  kubelet          Node addons-642189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m18s  kubelet          Node addons-642189 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m14s  node-controller  Node addons-642189 event: Registered Node addons-642189 in Controller
	  Normal  NodeReady                92s    kubelet          Node addons-642189 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a 2f 6c e1 4b 55 08 06
	[Oct17 18:19] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 02 c8 63 52 74 08 06
	[  +0.000452] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 2f 6c e1 4b 55 08 06
	[  +3.368183] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 07 8d 7d ba 08 08 06
	[  +0.010471] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 20 eb f3 5e da 08 06
	[ +40.138195] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 c9 4b ee 3b 17 08 06
	[  +4.024015] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 a9 2b 44 da ae 08 06
	[  +2.326155] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 3e c1 b1 8b d1 08 06
	[  +0.000336] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 20 eb f3 5e da 08 06
	[Oct17 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 21 ed 61 7b 76 08 06
	[  +0.000430] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 c9 4b ee 3b 17 08 06
	[ +31.393014] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d1 49 91 03 c2 08 06
	[  +0.000804] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 a9 2b 44 da ae 08 06
	
	
	==> etcd [8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d] <==
	{"level":"warn","ts":"2025-10-17T18:56:58.093476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.100531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.106750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.113186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.119549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.126275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.133362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.143251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.149623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.156016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:56:58.208425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:09.583634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:09.605057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:35.635447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:35.642496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:35.665185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39598","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T18:57:56.085532Z","caller":"traceutil/trace.go:172","msg":"trace[120260436] transaction","detail":"{read_only:false; response_revision:982; number_of_response:1; }","duration":"115.537819ms","start":"2025-10-17T18:57:55.969972Z","end":"2025-10-17T18:57:56.085510Z","steps":["trace[120260436] 'process raft request'  (duration: 115.315749ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T18:58:08.633548Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.724638ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-certs-patch-rdzx2.186f5c52347a130d\" limit:1 ","response":"range_response_count:1 size:841"}
	{"level":"info","ts":"2025-10-17T18:58:08.633661Z","caller":"traceutil/trace.go:172","msg":"trace[677186799] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-certs-patch-rdzx2.186f5c52347a130d; range_end:; response_count:1; response_revision:1070; }","duration":"123.867582ms","start":"2025-10-17T18:58:08.509770Z","end":"2025-10-17T18:58:08.633638Z","steps":["trace[677186799] 'agreement among raft nodes before linearized reading'  (duration: 87.790869ms)","trace[677186799] 'range keys from in-memory index tree'  (duration: 35.826097ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T18:58:08.633699Z","caller":"traceutil/trace.go:172","msg":"trace[673993011] transaction","detail":"{read_only:false; response_revision:1072; number_of_response:1; }","duration":"127.047611ms","start":"2025-10-17T18:58:08.506621Z","end":"2025-10-17T18:58:08.633668Z","steps":["trace[673993011] 'process raft request'  (duration: 126.95624ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T18:58:08.633704Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.29337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T18:58:08.633755Z","caller":"traceutil/trace.go:172","msg":"trace[391353548] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1072; }","duration":"111.383687ms","start":"2025-10-17T18:58:08.522364Z","end":"2025-10-17T18:58:08.633748Z","steps":["trace[391353548] 'agreement among raft nodes before linearized reading'  (duration: 111.276414ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T18:58:08.633799Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.662525ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T18:58:08.633742Z","caller":"traceutil/trace.go:172","msg":"trace[891336695] transaction","detail":"{read_only:false; response_revision:1071; number_of_response:1; }","duration":"156.790331ms","start":"2025-10-17T18:58:08.476930Z","end":"2025-10-17T18:58:08.633720Z","steps":["trace[891336695] 'process raft request'  (duration: 120.584837ms)","trace[891336695] 'compare'  (duration: 35.926156ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T18:58:08.633826Z","caller":"traceutil/trace.go:172","msg":"trace[1379347943] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1072; }","duration":"110.693619ms","start":"2025-10-17T18:58:08.523126Z","end":"2025-10-17T18:58:08.633820Z","steps":["trace[1379347943] 'agreement among raft nodes before linearized reading'  (duration: 110.643318ms)"],"step_count":1}
	
	
	==> gcp-auth [b4ac0698e398ee3ec3bf7468238bcff34349540a931e90303960359cdb3c9e91] <==
	2025/10/17 18:58:23 GCP Auth Webhook started!
	2025/10/17 18:59:09 Ready to marshal response ...
	2025/10/17 18:59:09 Ready to write response ...
	2025/10/17 18:59:09 Ready to marshal response ...
	2025/10/17 18:59:09 Ready to write response ...
	2025/10/17 18:59:09 Ready to marshal response ...
	2025/10/17 18:59:09 Ready to write response ...
	
	
	==> kernel <==
	 18:59:19 up  2:41,  0 user,  load average: 0.80, 0.63, 0.71
	Linux addons-642189 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd] <==
	E1017 18:57:37.272632       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 18:57:37.307614       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1017 18:57:38.472619       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 18:57:38.472665       1 metrics.go:72] Registering metrics
	I1017 18:57:38.472766       1 controller.go:711] "Syncing nftables rules"
	I1017 18:57:47.271992       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:57:47.272038       1 main.go:301] handling current node
	I1017 18:57:57.272043       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:57:57.272090       1 main.go:301] handling current node
	I1017 18:58:07.271198       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:58:07.271250       1 main.go:301] handling current node
	I1017 18:58:17.272005       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:58:17.272072       1 main.go:301] handling current node
	I1017 18:58:27.271965       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:58:27.272001       1 main.go:301] handling current node
	I1017 18:58:37.271325       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:58:37.271362       1 main.go:301] handling current node
	I1017 18:58:47.271487       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:58:47.271532       1 main.go:301] handling current node
	I1017 18:58:57.272179       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:58:57.272215       1 main.go:301] handling current node
	I1017 18:59:07.272099       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:59:07.272133       1 main.go:301] handling current node
	I1017 18:59:17.272272       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:59:17.272305       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9] <==
	W1017 18:57:09.600558       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1017 18:57:15.146545       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.100.197.54"}
	W1017 18:57:35.635414       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1017 18:57:35.642429       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1017 18:57:35.658116       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1017 18:57:35.665149       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1017 18:57:47.435943       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.197.54:443: connect: connection refused
	E1017 18:57:47.435998       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.197.54:443: connect: connection refused" logger="UnhandledError"
	W1017 18:57:47.435964       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.197.54:443: connect: connection refused
	E1017 18:57:47.436064       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.197.54:443: connect: connection refused" logger="UnhandledError"
	W1017 18:57:47.459589       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.197.54:443: connect: connection refused
	E1017 18:57:47.459634       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.197.54:443: connect: connection refused" logger="UnhandledError"
	W1017 18:57:47.461401       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.197.54:443: connect: connection refused
	E1017 18:57:47.461438       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.197.54:443: connect: connection refused" logger="UnhandledError"
	E1017 18:57:53.448979       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.234.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.234.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.234.99:443: connect: connection refused" logger="UnhandledError"
	W1017 18:57:53.449252       1 handler_proxy.go:99] no RequestInfo found in the context
	E1017 18:57:53.449336       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1017 18:57:53.450116       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.234.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.234.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.234.99:443: connect: connection refused" logger="UnhandledError"
	E1017 18:57:53.455433       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.234.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.234.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.234.99:443: connect: connection refused" logger="UnhandledError"
	I1017 18:57:53.510561       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1017 18:59:17.759026       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54558: use of closed network connection
	E1017 18:59:17.919788       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54570: use of closed network connection
	
	
	==> kube-controller-manager [44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa] <==
	I1017 18:57:05.620987       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 18:57:05.620639       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 18:57:05.620716       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 18:57:05.621016       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 18:57:05.621096       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-642189"
	I1017 18:57:05.621012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 18:57:05.620716       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 18:57:05.621159       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1017 18:57:05.621202       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 18:57:05.621318       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 18:57:05.621719       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 18:57:05.624443       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 18:57:05.624500       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 18:57:05.626984       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 18:57:05.633186       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 18:57:05.636439       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1017 18:57:08.196556       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1017 18:57:35.629222       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1017 18:57:35.629390       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1017 18:57:35.629445       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1017 18:57:35.645511       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1017 18:57:35.652568       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1017 18:57:35.730334       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 18:57:35.752820       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 18:57:50.628260       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f] <==
	I1017 18:57:06.846087       1 server_linux.go:53] "Using iptables proxy"
	I1017 18:57:06.942356       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 18:57:07.047726       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 18:57:07.047784       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 18:57:07.047889       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 18:57:07.178574       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 18:57:07.178782       1 server_linux.go:132] "Using iptables Proxier"
	I1017 18:57:07.194632       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 18:57:07.196450       1 server.go:527] "Version info" version="v1.34.1"
	I1017 18:57:07.196490       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 18:57:07.202121       1 config.go:200] "Starting service config controller"
	I1017 18:57:07.205933       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 18:57:07.202570       1 config.go:309] "Starting node config controller"
	I1017 18:57:07.205970       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 18:57:07.205976       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 18:57:07.202910       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 18:57:07.205984       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 18:57:07.202898       1 config.go:106] "Starting endpoint slice config controller"
	I1017 18:57:07.205995       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 18:57:07.307295       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 18:57:07.307367       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 18:57:07.319175       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a] <==
	E1017 18:56:58.633747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 18:56:58.633666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 18:56:58.633479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 18:56:58.633635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 18:56:58.633892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 18:56:58.634008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 18:56:58.634053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 18:56:58.634062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 18:56:58.634134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 18:56:58.634132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 18:56:58.634249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 18:56:58.634288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 18:56:58.634334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 18:56:58.634341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 18:56:58.634341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 18:56:58.634947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 18:56:59.462129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 18:56:59.480591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 18:56:59.535584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 18:56:59.554025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 18:56:59.604356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 18:56:59.693315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 18:56:59.698296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 18:56:59.787731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1017 18:57:01.831750       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 18:58:19 addons-642189 kubelet[1283]: I1017 18:58:19.539351    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-7wchq" podStartSLOduration=1.538744235 podStartE2EDuration="32.539328336s" podCreationTimestamp="2025-10-17 18:57:47 +0000 UTC" firstStartedPulling="2025-10-17 18:57:47.892115221 +0000 UTC m=+46.765071601" lastFinishedPulling="2025-10-17 18:58:18.89269932 +0000 UTC m=+77.765655702" observedRunningTime="2025-10-17 18:58:19.537135683 +0000 UTC m=+78.410092080" watchObservedRunningTime="2025-10-17 18:58:19.539328336 +0000 UTC m=+78.412284736"
	Oct 17 18:58:20 addons-642189 kubelet[1283]: I1017 18:58:20.533035    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-7wchq" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 18:58:22 addons-642189 kubelet[1283]: I1017 18:58:22.214558    1283 scope.go:117] "RemoveContainer" containerID="181115e0ccdb9bd8a7d17ea971f2cc3fb46177a083dba6ae4340a436789065cc"
	Oct 17 18:58:22 addons-642189 kubelet[1283]: I1017 18:58:22.546908    1283 scope.go:117] "RemoveContainer" containerID="181115e0ccdb9bd8a7d17ea971f2cc3fb46177a083dba6ae4340a436789065cc"
	Oct 17 18:58:22 addons-642189 kubelet[1283]: I1017 18:58:22.562461    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-862fn" podStartSLOduration=67.851215579 podStartE2EDuration="1m14.562440482s" podCreationTimestamp="2025-10-17 18:57:08 +0000 UTC" firstStartedPulling="2025-10-17 18:58:14.953997722 +0000 UTC m=+73.826954110" lastFinishedPulling="2025-10-17 18:58:21.665222625 +0000 UTC m=+80.538179013" observedRunningTime="2025-10-17 18:58:22.561758697 +0000 UTC m=+81.434715094" watchObservedRunningTime="2025-10-17 18:58:22.562440482 +0000 UTC m=+81.435396879"
	Oct 17 18:58:23 addons-642189 kubelet[1283]: I1017 18:58:23.808926    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tkkx\" (UniqueName: \"kubernetes.io/projected/6b35c968-6ea4-4090-a43e-ea607b6e8916-kube-api-access-8tkkx\") pod \"6b35c968-6ea4-4090-a43e-ea607b6e8916\" (UID: \"6b35c968-6ea4-4090-a43e-ea607b6e8916\") "
	Oct 17 18:58:23 addons-642189 kubelet[1283]: I1017 18:58:23.811909    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b35c968-6ea4-4090-a43e-ea607b6e8916-kube-api-access-8tkkx" (OuterVolumeSpecName: "kube-api-access-8tkkx") pod "6b35c968-6ea4-4090-a43e-ea607b6e8916" (UID: "6b35c968-6ea4-4090-a43e-ea607b6e8916"). InnerVolumeSpecName "kube-api-access-8tkkx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 17 18:58:23 addons-642189 kubelet[1283]: I1017 18:58:23.910466    1283 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8tkkx\" (UniqueName: \"kubernetes.io/projected/6b35c968-6ea4-4090-a43e-ea607b6e8916-kube-api-access-8tkkx\") on node \"addons-642189\" DevicePath \"\""
	Oct 17 18:58:24 addons-642189 kubelet[1283]: I1017 18:58:24.564562    1283 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f916396ee7e2b6ff5870d9d7c9bf2823cfdec337abc2e692fee531c6b540cbe5"
	Oct 17 18:58:24 addons-642189 kubelet[1283]: I1017 18:58:24.579622    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-qz4xs" podStartSLOduration=65.703626834 podStartE2EDuration="1m9.579598522s" podCreationTimestamp="2025-10-17 18:57:15 +0000 UTC" firstStartedPulling="2025-10-17 18:58:19.838551662 +0000 UTC m=+78.711508039" lastFinishedPulling="2025-10-17 18:58:23.714523339 +0000 UTC m=+82.587479727" observedRunningTime="2025-10-17 18:58:24.579401197 +0000 UTC m=+83.452357594" watchObservedRunningTime="2025-10-17 18:58:24.579598522 +0000 UTC m=+83.452554929"
	Oct 17 18:58:28 addons-642189 kubelet[1283]: I1017 18:58:28.596971    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-m2d8d" podStartSLOduration=72.755447963 podStartE2EDuration="1m20.596945766s" podCreationTimestamp="2025-10-17 18:57:08 +0000 UTC" firstStartedPulling="2025-10-17 18:58:19.893365869 +0000 UTC m=+78.766322262" lastFinishedPulling="2025-10-17 18:58:27.734863686 +0000 UTC m=+86.607820065" observedRunningTime="2025-10-17 18:58:28.595851186 +0000 UTC m=+87.468807583" watchObservedRunningTime="2025-10-17 18:58:28.596945766 +0000 UTC m=+87.469902165"
	Oct 17 18:58:29 addons-642189 kubelet[1283]: I1017 18:58:29.282533    1283 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 17 18:58:29 addons-642189 kubelet[1283]: I1017 18:58:29.282568    1283 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 17 18:58:32 addons-642189 kubelet[1283]: I1017 18:58:32.627526    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-5kdtq" podStartSLOduration=1.906517418 podStartE2EDuration="45.627501357s" podCreationTimestamp="2025-10-17 18:57:47 +0000 UTC" firstStartedPulling="2025-10-17 18:57:47.890738277 +0000 UTC m=+46.763694653" lastFinishedPulling="2025-10-17 18:58:31.611722207 +0000 UTC m=+90.484678592" observedRunningTime="2025-10-17 18:58:32.626537875 +0000 UTC m=+91.499494273" watchObservedRunningTime="2025-10-17 18:58:32.627501357 +0000 UTC m=+91.500457754"
	Oct 17 18:58:49 addons-642189 kubelet[1283]: I1017 18:58:49.217220    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e9d7ea8-5cb1-439d-b657-5a684bfa0d70" path="/var/lib/kubelet/pods/6e9d7ea8-5cb1-439d-b657-5a684bfa0d70/volumes"
	Oct 17 18:58:51 addons-642189 kubelet[1283]: E1017 18:58:51.336581    1283 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 17 18:58:51 addons-642189 kubelet[1283]: E1017 18:58:51.336697    1283 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff764293-9993-42e2-aed2-de34ffce5c63-gcr-creds podName:ff764293-9993-42e2-aed2-de34ffce5c63 nodeName:}" failed. No retries permitted until 2025-10-17 18:59:55.336667114 +0000 UTC m=+174.209623493 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/ff764293-9993-42e2-aed2-de34ffce5c63-gcr-creds") pod "registry-creds-764b6fb674-wpqx2" (UID: "ff764293-9993-42e2-aed2-de34ffce5c63") : secret "registry-creds-gcr" not found
	Oct 17 18:58:55 addons-642189 kubelet[1283]: I1017 18:58:55.217207    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b35c968-6ea4-4090-a43e-ea607b6e8916" path="/var/lib/kubelet/pods/6b35c968-6ea4-4090-a43e-ea607b6e8916/volumes"
	Oct 17 18:59:01 addons-642189 kubelet[1283]: I1017 18:59:01.507228    1283 scope.go:117] "RemoveContainer" containerID="2e1e158d2a68da1f76932be037731904e5b9cbc9434c9784d70db25955f0ab86"
	Oct 17 18:59:01 addons-642189 kubelet[1283]: I1017 18:59:01.518478    1283 scope.go:117] "RemoveContainer" containerID="c6ebab856ea5de70587af236e7f373e541584db37fe0cf61b65bddc3d3a82a5d"
	Oct 17 18:59:09 addons-642189 kubelet[1283]: I1017 18:59:09.685446    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzzq8\" (UniqueName: \"kubernetes.io/projected/11e3a0a3-e413-4307-8a33-7461887a2188-kube-api-access-fzzq8\") pod \"busybox\" (UID: \"11e3a0a3-e413-4307-8a33-7461887a2188\") " pod="default/busybox"
	Oct 17 18:59:09 addons-642189 kubelet[1283]: I1017 18:59:09.685512    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/11e3a0a3-e413-4307-8a33-7461887a2188-gcp-creds\") pod \"busybox\" (UID: \"11e3a0a3-e413-4307-8a33-7461887a2188\") " pod="default/busybox"
	Oct 17 18:59:10 addons-642189 kubelet[1283]: I1017 18:59:10.771886    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.05062126 podStartE2EDuration="1.771859876s" podCreationTimestamp="2025-10-17 18:59:09 +0000 UTC" firstStartedPulling="2025-10-17 18:59:09.929329237 +0000 UTC m=+128.802285616" lastFinishedPulling="2025-10-17 18:59:10.650567855 +0000 UTC m=+129.523524232" observedRunningTime="2025-10-17 18:59:10.771116485 +0000 UTC m=+129.644072934" watchObservedRunningTime="2025-10-17 18:59:10.771859876 +0000 UTC m=+129.644816272"
	Oct 17 18:59:17 addons-642189 kubelet[1283]: E1017 18:59:17.758957    1283 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:32876->127.0.0.1:45019: write tcp 127.0.0.1:32876->127.0.0.1:45019: write: broken pipe
	Oct 17 18:59:17 addons-642189 kubelet[1283]: E1017 18:59:17.919712    1283 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:32888->127.0.0.1:45019: write tcp 127.0.0.1:32888->127.0.0.1:45019: write: broken pipe
	
	
	==> storage-provisioner [05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c] <==
	W1017 18:58:54.445061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:58:56.448162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:58:56.452621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:58:58.456005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:58:58.461921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:00.465159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:00.470140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:02.473661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:02.477921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:04.481482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:04.487186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:06.490255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:06.495842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:08.500203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:08.504443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:10.508898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:10.515861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:12.519428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:12.524011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:14.527523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:14.533052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:16.536440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:16.540781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:18.544118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:18.549954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
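Note: the repeated client-go warnings in the storage-provisioner block above arrive roughly every two seconds, which matches a leader-election renew loop that still takes its lock on a v1 Endpoints object; Kubernetes 1.33+ prints a deprecation warning on every such request, so this noise is expected and harmless. The discovery.k8s.io/v1 replacement objects can be listed directly (a hypothetical follow-up check, not part of the test):

	kubectl --context addons-642189 -n kube-system get endpointslices.discovery.k8s.io
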
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-642189 -n addons-642189
helpers_test.go:269: (dbg) Run:  kubectl --context addons-642189 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-xlhk6 ingress-nginx-admission-patch-bm6p2 registry-creds-764b6fb674-wpqx2
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-642189 describe pod ingress-nginx-admission-create-xlhk6 ingress-nginx-admission-patch-bm6p2 registry-creds-764b6fb674-wpqx2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-642189 describe pod ingress-nginx-admission-create-xlhk6 ingress-nginx-admission-patch-bm6p2 registry-creds-764b6fb674-wpqx2: exit status 1 (63.70114ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xlhk6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bm6p2" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-wpqx2" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-642189 describe pod ingress-nginx-admission-create-xlhk6 ingress-nginx-admission-patch-bm6p2 registry-creds-764b6fb674-wpqx2: exit status 1
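Note: the NotFound errors above are a post-mortem race rather than a separate failure: the ingress-nginx admission create/patch pods are one-shot jobs and all three pods matched --field-selector=status.phase!=Running at helpers_test.go:280, but appear to have been removed by the time describe ran. When pods vanish this quickly, recent events are often the only artifact left (a hypothetical follow-up, not something the helper runs):

	kubectl --context addons-642189 get events -A --sort-by=.lastTimestamp | tail -n 20
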
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-642189 addons disable headlamp --alsologtostderr -v=1: exit status 11 (243.932204ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 18:59:20.577293  506524 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:59:20.577549  506524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:20.577557  506524 out.go:374] Setting ErrFile to fd 2...
	I1017 18:59:20.577561  506524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:20.577801  506524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 18:59:20.578090  506524 mustload.go:65] Loading cluster: addons-642189
	I1017 18:59:20.578458  506524 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:20.578473  506524 addons.go:606] checking whether the cluster is paused
	I1017 18:59:20.578549  506524 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:20.578561  506524 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:59:20.578987  506524 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:59:20.597257  506524 ssh_runner.go:195] Run: systemctl --version
	I1017 18:59:20.597314  506524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:59:20.615456  506524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:59:20.711816  506524 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:59:20.711909  506524 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:59:20.743891  506524 cri.go:89] found id: "621b748d538846f79bca883df087ce87a58a6e5cc5dbbb8f2ae4845785e122d6"
	I1017 18:59:20.743914  506524 cri.go:89] found id: "317712e1d5627e3d52413fcacd6f0a3e40e74b682567b7117acb2ddbf4da2a72"
	I1017 18:59:20.743918  506524 cri.go:89] found id: "c8951bd4e7631f9e6fa9ad944251500dd44cda63d891f7b553931aa3ef22e7e7"
	I1017 18:59:20.743922  506524 cri.go:89] found id: "6073132bac88bd54a1f9014aa1b74b68de0ac557ac483e5c0a7ff51ac939a2dd"
	I1017 18:59:20.743924  506524 cri.go:89] found id: "d3882b8636526fe7f302d3351b33fc68a8df5109463693af6642869704a2b6a2"
	I1017 18:59:20.743928  506524 cri.go:89] found id: "99fe19979e6f782cb4ff1df09e72c9c58535540daf8c28b63b4a3f1719cfa365"
	I1017 18:59:20.743931  506524 cri.go:89] found id: "600ce5e0b6a8556aa7c055afc13692cb00b2d6f0a82ba6d0817e5e424b49881c"
	I1017 18:59:20.743933  506524 cri.go:89] found id: "214596c066d6ebce81c069cda2c2790ee022d3770221e7e183390decc49e626b"
	I1017 18:59:20.743936  506524 cri.go:89] found id: "cd49fb8b1ee5ca17b886edf352059ec13c3aa8fed46c8383c6660656f0403d67"
	I1017 18:59:20.743947  506524 cri.go:89] found id: "b3f4b36a5cb43ddf3de225e5f08ad4b2d165ae6234c165082ae1316a48f48425"
	I1017 18:59:20.743956  506524 cri.go:89] found id: "fc47a341f594c2a4203992ac73a7c89fb4722c54e399f0949c3604dfa81f70ef"
	I1017 18:59:20.743959  506524 cri.go:89] found id: "d3140eef7e893c98db6a57843c26dd0767733610bfaab45265577ad3a64a334e"
	I1017 18:59:20.743961  506524 cri.go:89] found id: "bce8d27694469a014387080e94f416e3cfb88071ea69506ea8a1d04b16176e43"
	I1017 18:59:20.743964  506524 cri.go:89] found id: "26a77f9d8fd20d481d8ec7b0d85a65954b10af33ae4994293044ad2067b41872"
	I1017 18:59:20.743966  506524 cri.go:89] found id: "afa9f6b0496818adf412ab2c3cf979e86f2593796860b7b9e53c8fd85f0fe586"
	I1017 18:59:20.743973  506524 cri.go:89] found id: "ea8c7aa6a69f9b8476c9c28a6ac0944597fdf78921727f3c142c41a2b6a9bb00"
	I1017 18:59:20.743976  506524 cri.go:89] found id: "05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c"
	I1017 18:59:20.743979  506524 cri.go:89] found id: "c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854"
	I1017 18:59:20.743982  506524 cri.go:89] found id: "d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd"
	I1017 18:59:20.743984  506524 cri.go:89] found id: "49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f"
	I1017 18:59:20.743986  506524 cri.go:89] found id: "43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a"
	I1017 18:59:20.743988  506524 cri.go:89] found id: "8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d"
	I1017 18:59:20.743991  506524 cri.go:89] found id: "44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa"
	I1017 18:59:20.743993  506524 cri.go:89] found id: "a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9"
	I1017 18:59:20.743996  506524 cri.go:89] found id: ""
	I1017 18:59:20.744033  506524 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 18:59:20.760203  506524 out.go:203] 
	W1017 18:59:20.761625  506524 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 18:59:20.761660  506524 out.go:285] * 
	* 
	W1017 18:59:20.766057  506524 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 18:59:20.767478  506524 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-642189 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.61s)
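Note: this exit status 11 is the common failure mode for every addons disable call in this report. Before disabling an addon, minikube checks whether the cluster is paused (addons.go:606): it first lists kube-system containers through the CRI (the crictl call above, which succeeds and returns 24 container IDs) and then runs sudo runc list -f json on the node. That second probe fails because /run/runc does not exist on this crio node, presumably because crio drives its runtime under a different state root, so the whole command aborts with MK_ADDON_DISABLE_PAUSED even though the CRI listing proved the cluster is running. A minimal manual reproduction over the same SSH path (hypothetical re-runs of the two probes the log shows):

	minikube -p addons-642189 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds, prints container IDs
	minikube -p addons-642189 ssh -- sudo runc list -f json   # fails: open /run/runc: no such file or directory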

TestAddons/parallel/CloudSpanner (5.25s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-fbjhl" [5db5e4d2-9458-4f8a-aaa2-6ca57bb69b71] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003918516s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-642189 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (243.394816ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 18:59:29.484547  507073 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:59:29.484808  507073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:29.484816  507073 out.go:374] Setting ErrFile to fd 2...
	I1017 18:59:29.484820  507073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:29.485033  507073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 18:59:29.485306  507073 mustload.go:65] Loading cluster: addons-642189
	I1017 18:59:29.485646  507073 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:29.485659  507073 addons.go:606] checking whether the cluster is paused
	I1017 18:59:29.485761  507073 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:29.485774  507073 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:59:29.486164  507073 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:59:29.505037  507073 ssh_runner.go:195] Run: systemctl --version
	I1017 18:59:29.505097  507073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:59:29.523709  507073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:59:29.621906  507073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:59:29.621984  507073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:59:29.653673  507073 cri.go:89] found id: "621b748d538846f79bca883df087ce87a58a6e5cc5dbbb8f2ae4845785e122d6"
	I1017 18:59:29.653713  507073 cri.go:89] found id: "317712e1d5627e3d52413fcacd6f0a3e40e74b682567b7117acb2ddbf4da2a72"
	I1017 18:59:29.653726  507073 cri.go:89] found id: "c8951bd4e7631f9e6fa9ad944251500dd44cda63d891f7b553931aa3ef22e7e7"
	I1017 18:59:29.653729  507073 cri.go:89] found id: "6073132bac88bd54a1f9014aa1b74b68de0ac557ac483e5c0a7ff51ac939a2dd"
	I1017 18:59:29.653732  507073 cri.go:89] found id: "d3882b8636526fe7f302d3351b33fc68a8df5109463693af6642869704a2b6a2"
	I1017 18:59:29.653735  507073 cri.go:89] found id: "99fe19979e6f782cb4ff1df09e72c9c58535540daf8c28b63b4a3f1719cfa365"
	I1017 18:59:29.653738  507073 cri.go:89] found id: "600ce5e0b6a8556aa7c055afc13692cb00b2d6f0a82ba6d0817e5e424b49881c"
	I1017 18:59:29.653743  507073 cri.go:89] found id: "214596c066d6ebce81c069cda2c2790ee022d3770221e7e183390decc49e626b"
	I1017 18:59:29.653747  507073 cri.go:89] found id: "cd49fb8b1ee5ca17b886edf352059ec13c3aa8fed46c8383c6660656f0403d67"
	I1017 18:59:29.653754  507073 cri.go:89] found id: "b3f4b36a5cb43ddf3de225e5f08ad4b2d165ae6234c165082ae1316a48f48425"
	I1017 18:59:29.653757  507073 cri.go:89] found id: "fc47a341f594c2a4203992ac73a7c89fb4722c54e399f0949c3604dfa81f70ef"
	I1017 18:59:29.653761  507073 cri.go:89] found id: "d3140eef7e893c98db6a57843c26dd0767733610bfaab45265577ad3a64a334e"
	I1017 18:59:29.653765  507073 cri.go:89] found id: "bce8d27694469a014387080e94f416e3cfb88071ea69506ea8a1d04b16176e43"
	I1017 18:59:29.653769  507073 cri.go:89] found id: "26a77f9d8fd20d481d8ec7b0d85a65954b10af33ae4994293044ad2067b41872"
	I1017 18:59:29.653773  507073 cri.go:89] found id: "afa9f6b0496818adf412ab2c3cf979e86f2593796860b7b9e53c8fd85f0fe586"
	I1017 18:59:29.653782  507073 cri.go:89] found id: "ea8c7aa6a69f9b8476c9c28a6ac0944597fdf78921727f3c142c41a2b6a9bb00"
	I1017 18:59:29.653789  507073 cri.go:89] found id: "05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c"
	I1017 18:59:29.653793  507073 cri.go:89] found id: "c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854"
	I1017 18:59:29.653795  507073 cri.go:89] found id: "d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd"
	I1017 18:59:29.653798  507073 cri.go:89] found id: "49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f"
	I1017 18:59:29.653800  507073 cri.go:89] found id: "43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a"
	I1017 18:59:29.653803  507073 cri.go:89] found id: "8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d"
	I1017 18:59:29.653805  507073 cri.go:89] found id: "44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa"
	I1017 18:59:29.653808  507073 cri.go:89] found id: "a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9"
	I1017 18:59:29.653810  507073 cri.go:89] found id: ""
	I1017 18:59:29.653882  507073 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 18:59:29.668580  507073 out.go:203] 
	W1017 18:59:29.669923  507073 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 18:59:29.669946  507073 out.go:285] * 
	* 
	W1017 18:59:29.674078  507073 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 18:59:29.675521  507073 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-642189 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)
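Note: same paused-check failure as in the Headlamp section. To confirm which runtime state root actually exists on the node, one could inspect /run directly (hypothetical checks, not part of the test; on a crio node /run/crio should exist, while /run/runc is the path the probe expects):

	minikube -p addons-642189 ssh -- sudo ls /run/runc   # expected to fail here: No such file or directory
	minikube -p addons-642189 ssh -- sudo ls /run/crio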

TestAddons/parallel/LocalPath (10.16s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-642189 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-642189 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-642189 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [91a308cc-0598-4607-b347-d02ef91745d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [91a308cc-0598-4607-b347-d02ef91745d9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [91a308cc-0598-4607-b347-d02ef91745d9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003653355s
addons_test.go:967: (dbg) Run:  kubectl --context addons-642189 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 ssh "cat /opt/local-path-provisioner/pvc-b324b6e5-390e-427c-bd7c-84a9e595ad1f_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-642189 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-642189 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-642189 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (245.193717ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 18:59:34.385423  508205 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:59:34.385708  508205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:34.385720  508205 out.go:374] Setting ErrFile to fd 2...
	I1017 18:59:34.385727  508205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:34.386022  508205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 18:59:34.386366  508205 mustload.go:65] Loading cluster: addons-642189
	I1017 18:59:34.386870  508205 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:34.386895  508205 addons.go:606] checking whether the cluster is paused
	I1017 18:59:34.387404  508205 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:34.387435  508205 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:59:34.387927  508205 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:59:34.406234  508205 ssh_runner.go:195] Run: systemctl --version
	I1017 18:59:34.406312  508205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:59:34.424227  508205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:59:34.520940  508205 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:59:34.521053  508205 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:59:34.551609  508205 cri.go:89] found id: "621b748d538846f79bca883df087ce87a58a6e5cc5dbbb8f2ae4845785e122d6"
	I1017 18:59:34.551630  508205 cri.go:89] found id: "317712e1d5627e3d52413fcacd6f0a3e40e74b682567b7117acb2ddbf4da2a72"
	I1017 18:59:34.551634  508205 cri.go:89] found id: "c8951bd4e7631f9e6fa9ad944251500dd44cda63d891f7b553931aa3ef22e7e7"
	I1017 18:59:34.551637  508205 cri.go:89] found id: "6073132bac88bd54a1f9014aa1b74b68de0ac557ac483e5c0a7ff51ac939a2dd"
	I1017 18:59:34.551640  508205 cri.go:89] found id: "d3882b8636526fe7f302d3351b33fc68a8df5109463693af6642869704a2b6a2"
	I1017 18:59:34.551643  508205 cri.go:89] found id: "99fe19979e6f782cb4ff1df09e72c9c58535540daf8c28b63b4a3f1719cfa365"
	I1017 18:59:34.551646  508205 cri.go:89] found id: "600ce5e0b6a8556aa7c055afc13692cb00b2d6f0a82ba6d0817e5e424b49881c"
	I1017 18:59:34.551648  508205 cri.go:89] found id: "214596c066d6ebce81c069cda2c2790ee022d3770221e7e183390decc49e626b"
	I1017 18:59:34.551651  508205 cri.go:89] found id: "cd49fb8b1ee5ca17b886edf352059ec13c3aa8fed46c8383c6660656f0403d67"
	I1017 18:59:34.551656  508205 cri.go:89] found id: "b3f4b36a5cb43ddf3de225e5f08ad4b2d165ae6234c165082ae1316a48f48425"
	I1017 18:59:34.551659  508205 cri.go:89] found id: "fc47a341f594c2a4203992ac73a7c89fb4722c54e399f0949c3604dfa81f70ef"
	I1017 18:59:34.551662  508205 cri.go:89] found id: "d3140eef7e893c98db6a57843c26dd0767733610bfaab45265577ad3a64a334e"
	I1017 18:59:34.551664  508205 cri.go:89] found id: "bce8d27694469a014387080e94f416e3cfb88071ea69506ea8a1d04b16176e43"
	I1017 18:59:34.551666  508205 cri.go:89] found id: "26a77f9d8fd20d481d8ec7b0d85a65954b10af33ae4994293044ad2067b41872"
	I1017 18:59:34.551669  508205 cri.go:89] found id: "afa9f6b0496818adf412ab2c3cf979e86f2593796860b7b9e53c8fd85f0fe586"
	I1017 18:59:34.551673  508205 cri.go:89] found id: "ea8c7aa6a69f9b8476c9c28a6ac0944597fdf78921727f3c142c41a2b6a9bb00"
	I1017 18:59:34.551676  508205 cri.go:89] found id: "05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c"
	I1017 18:59:34.551695  508205 cri.go:89] found id: "c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854"
	I1017 18:59:34.551699  508205 cri.go:89] found id: "d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd"
	I1017 18:59:34.551703  508205 cri.go:89] found id: "49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f"
	I1017 18:59:34.551707  508205 cri.go:89] found id: "43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a"
	I1017 18:59:34.551710  508205 cri.go:89] found id: "8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d"
	I1017 18:59:34.551715  508205 cri.go:89] found id: "44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa"
	I1017 18:59:34.551719  508205 cri.go:89] found id: "a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9"
	I1017 18:59:34.551733  508205 cri.go:89] found id: ""
	I1017 18:59:34.551781  508205 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 18:59:34.566646  508205 out.go:203] 
	W1017 18:59:34.567979  508205 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 18:59:34.568005  508205 out.go:285] * 
	* 
	W1017 18:59:34.574444  508205 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 18:59:34.577795  508205 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-642189 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.16s)
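Note: the local-path data path itself passed: the PVC phase polling above reached Bound, the test-local-path pod completed, and the written file1 was read back over SSH; only the trailing addons disable hit the same paused-check failure. The six identical jsonpath polls could equally be expressed as a single one-shot wait (a hypothetical equivalent, not what the helper runs):

	kubectl --context addons-642189 -n default wait pvc/test-pvc --for=jsonpath='{.status.phase}'=Bound --timeout=5m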

TestAddons/parallel/NvidiaDevicePlugin (6.26s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-5272k" [f201ab4f-abad-46f2-a109-95004c7250f7] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004102422s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-642189 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (252.716451ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 18:59:24.219046  506630 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:59:24.219302  506630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:24.219310  506630 out.go:374] Setting ErrFile to fd 2...
	I1017 18:59:24.219314  506630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:24.219515  506630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 18:59:24.219820  506630 mustload.go:65] Loading cluster: addons-642189
	I1017 18:59:24.220162  506630 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:24.220178  506630 addons.go:606] checking whether the cluster is paused
	I1017 18:59:24.220259  506630 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:24.220271  506630 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:59:24.220792  506630 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:59:24.242038  506630 ssh_runner.go:195] Run: systemctl --version
	I1017 18:59:24.242138  506630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:59:24.261067  506630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:59:24.358311  506630 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:59:24.358398  506630 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:59:24.393161  506630 cri.go:89] found id: "621b748d538846f79bca883df087ce87a58a6e5cc5dbbb8f2ae4845785e122d6"
	I1017 18:59:24.393187  506630 cri.go:89] found id: "317712e1d5627e3d52413fcacd6f0a3e40e74b682567b7117acb2ddbf4da2a72"
	I1017 18:59:24.393193  506630 cri.go:89] found id: "c8951bd4e7631f9e6fa9ad944251500dd44cda63d891f7b553931aa3ef22e7e7"
	I1017 18:59:24.393197  506630 cri.go:89] found id: "6073132bac88bd54a1f9014aa1b74b68de0ac557ac483e5c0a7ff51ac939a2dd"
	I1017 18:59:24.393202  506630 cri.go:89] found id: "d3882b8636526fe7f302d3351b33fc68a8df5109463693af6642869704a2b6a2"
	I1017 18:59:24.393208  506630 cri.go:89] found id: "99fe19979e6f782cb4ff1df09e72c9c58535540daf8c28b63b4a3f1719cfa365"
	I1017 18:59:24.393212  506630 cri.go:89] found id: "600ce5e0b6a8556aa7c055afc13692cb00b2d6f0a82ba6d0817e5e424b49881c"
	I1017 18:59:24.393216  506630 cri.go:89] found id: "214596c066d6ebce81c069cda2c2790ee022d3770221e7e183390decc49e626b"
	I1017 18:59:24.393221  506630 cri.go:89] found id: "cd49fb8b1ee5ca17b886edf352059ec13c3aa8fed46c8383c6660656f0403d67"
	I1017 18:59:24.393235  506630 cri.go:89] found id: "b3f4b36a5cb43ddf3de225e5f08ad4b2d165ae6234c165082ae1316a48f48425"
	I1017 18:59:24.393240  506630 cri.go:89] found id: "fc47a341f594c2a4203992ac73a7c89fb4722c54e399f0949c3604dfa81f70ef"
	I1017 18:59:24.393244  506630 cri.go:89] found id: "d3140eef7e893c98db6a57843c26dd0767733610bfaab45265577ad3a64a334e"
	I1017 18:59:24.393249  506630 cri.go:89] found id: "bce8d27694469a014387080e94f416e3cfb88071ea69506ea8a1d04b16176e43"
	I1017 18:59:24.393253  506630 cri.go:89] found id: "26a77f9d8fd20d481d8ec7b0d85a65954b10af33ae4994293044ad2067b41872"
	I1017 18:59:24.393258  506630 cri.go:89] found id: "afa9f6b0496818adf412ab2c3cf979e86f2593796860b7b9e53c8fd85f0fe586"
	I1017 18:59:24.393264  506630 cri.go:89] found id: "ea8c7aa6a69f9b8476c9c28a6ac0944597fdf78921727f3c142c41a2b6a9bb00"
	I1017 18:59:24.393272  506630 cri.go:89] found id: "05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c"
	I1017 18:59:24.393278  506630 cri.go:89] found id: "c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854"
	I1017 18:59:24.393283  506630 cri.go:89] found id: "d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd"
	I1017 18:59:24.393287  506630 cri.go:89] found id: "49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f"
	I1017 18:59:24.393291  506630 cri.go:89] found id: "43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a"
	I1017 18:59:24.393295  506630 cri.go:89] found id: "8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d"
	I1017 18:59:24.393299  506630 cri.go:89] found id: "44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa"
	I1017 18:59:24.393303  506630 cri.go:89] found id: "a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9"
	I1017 18:59:24.393307  506630 cri.go:89] found id: ""
	I1017 18:59:24.393359  506630 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 18:59:24.409844  506630 out.go:203] 
	W1017 18:59:24.412433  506630 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 18:59:24.412460  506630 out.go:285] * 
	* 
	W1017 18:59:24.417289  506630 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 18:59:24.418665  506630 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-642189 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.26s)

TestAddons/parallel/Yakd (5.27s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-76bx8" [9c877611-6296-4216-bca8-91cfe8ab131e] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004220195s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-642189 addons disable yakd --alsologtostderr -v=1: exit status 11 (260.188624ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 18:59:25.831304  506820 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:59:25.831633  506820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:25.831645  506820 out.go:374] Setting ErrFile to fd 2...
	I1017 18:59:25.831651  506820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:25.831914  506820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 18:59:25.832219  506820 mustload.go:65] Loading cluster: addons-642189
	I1017 18:59:25.832627  506820 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:25.832649  506820 addons.go:606] checking whether the cluster is paused
	I1017 18:59:25.832789  506820 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:25.832808  506820 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:59:25.833361  506820 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:59:25.852855  506820 ssh_runner.go:195] Run: systemctl --version
	I1017 18:59:25.852925  506820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:59:25.873923  506820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:59:25.974161  506820 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:59:25.974241  506820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:59:26.008656  506820 cri.go:89] found id: "621b748d538846f79bca883df087ce87a58a6e5cc5dbbb8f2ae4845785e122d6"
	I1017 18:59:26.008733  506820 cri.go:89] found id: "317712e1d5627e3d52413fcacd6f0a3e40e74b682567b7117acb2ddbf4da2a72"
	I1017 18:59:26.008740  506820 cri.go:89] found id: "c8951bd4e7631f9e6fa9ad944251500dd44cda63d891f7b553931aa3ef22e7e7"
	I1017 18:59:26.008745  506820 cri.go:89] found id: "6073132bac88bd54a1f9014aa1b74b68de0ac557ac483e5c0a7ff51ac939a2dd"
	I1017 18:59:26.008749  506820 cri.go:89] found id: "d3882b8636526fe7f302d3351b33fc68a8df5109463693af6642869704a2b6a2"
	I1017 18:59:26.008756  506820 cri.go:89] found id: "99fe19979e6f782cb4ff1df09e72c9c58535540daf8c28b63b4a3f1719cfa365"
	I1017 18:59:26.008760  506820 cri.go:89] found id: "600ce5e0b6a8556aa7c055afc13692cb00b2d6f0a82ba6d0817e5e424b49881c"
	I1017 18:59:26.008765  506820 cri.go:89] found id: "214596c066d6ebce81c069cda2c2790ee022d3770221e7e183390decc49e626b"
	I1017 18:59:26.008770  506820 cri.go:89] found id: "cd49fb8b1ee5ca17b886edf352059ec13c3aa8fed46c8383c6660656f0403d67"
	I1017 18:59:26.008794  506820 cri.go:89] found id: "b3f4b36a5cb43ddf3de225e5f08ad4b2d165ae6234c165082ae1316a48f48425"
	I1017 18:59:26.008802  506820 cri.go:89] found id: "fc47a341f594c2a4203992ac73a7c89fb4722c54e399f0949c3604dfa81f70ef"
	I1017 18:59:26.008806  506820 cri.go:89] found id: "d3140eef7e893c98db6a57843c26dd0767733610bfaab45265577ad3a64a334e"
	I1017 18:59:26.008810  506820 cri.go:89] found id: "bce8d27694469a014387080e94f416e3cfb88071ea69506ea8a1d04b16176e43"
	I1017 18:59:26.008827  506820 cri.go:89] found id: "26a77f9d8fd20d481d8ec7b0d85a65954b10af33ae4994293044ad2067b41872"
	I1017 18:59:26.008831  506820 cri.go:89] found id: "afa9f6b0496818adf412ab2c3cf979e86f2593796860b7b9e53c8fd85f0fe586"
	I1017 18:59:26.008849  506820 cri.go:89] found id: "ea8c7aa6a69f9b8476c9c28a6ac0944597fdf78921727f3c142c41a2b6a9bb00"
	I1017 18:59:26.008861  506820 cri.go:89] found id: "05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c"
	I1017 18:59:26.008868  506820 cri.go:89] found id: "c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854"
	I1017 18:59:26.008873  506820 cri.go:89] found id: "d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd"
	I1017 18:59:26.008876  506820 cri.go:89] found id: "49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f"
	I1017 18:59:26.008879  506820 cri.go:89] found id: "43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a"
	I1017 18:59:26.008881  506820 cri.go:89] found id: "8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d"
	I1017 18:59:26.008885  506820 cri.go:89] found id: "44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa"
	I1017 18:59:26.008891  506820 cri.go:89] found id: "a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9"
	I1017 18:59:26.008895  506820 cri.go:89] found id: ""
	I1017 18:59:26.008953  506820 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 18:59:26.025468  506820 out.go:203] 
	W1017 18:59:26.026817  506820 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 18:59:26.026841  506820 out.go:285] * 
	* 
	W1017 18:59:26.031048  506820 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 18:59:26.032585  506820 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-642189 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.27s)

TestAddons/parallel/AmdGpuDevicePlugin (6.26s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-t48xm" [3156d3f4-4196-443e-86ea-eb10fdc988bc] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003894167s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-642189 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-642189 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (257.388028ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 18:59:24.226109  506631 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:59:24.226465  506631 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:24.226478  506631 out.go:374] Setting ErrFile to fd 2...
	I1017 18:59:24.226485  506631 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:24.226831  506631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 18:59:24.227243  506631 mustload.go:65] Loading cluster: addons-642189
	I1017 18:59:24.227808  506631 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:24.227834  506631 addons.go:606] checking whether the cluster is paused
	I1017 18:59:24.227964  506631 config.go:182] Loaded profile config "addons-642189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:24.227984  506631 host.go:66] Checking if "addons-642189" exists ...
	I1017 18:59:24.228527  506631 cli_runner.go:164] Run: docker container inspect addons-642189 --format={{.State.Status}}
	I1017 18:59:24.249166  506631 ssh_runner.go:195] Run: systemctl --version
	I1017 18:59:24.249233  506631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642189
	I1017 18:59:24.268342  506631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/addons-642189/id_rsa Username:docker}
	I1017 18:59:24.364215  506631 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:59:24.364339  506631 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:59:24.399044  506631 cri.go:89] found id: "621b748d538846f79bca883df087ce87a58a6e5cc5dbbb8f2ae4845785e122d6"
	I1017 18:59:24.399069  506631 cri.go:89] found id: "317712e1d5627e3d52413fcacd6f0a3e40e74b682567b7117acb2ddbf4da2a72"
	I1017 18:59:24.399074  506631 cri.go:89] found id: "c8951bd4e7631f9e6fa9ad944251500dd44cda63d891f7b553931aa3ef22e7e7"
	I1017 18:59:24.399077  506631 cri.go:89] found id: "6073132bac88bd54a1f9014aa1b74b68de0ac557ac483e5c0a7ff51ac939a2dd"
	I1017 18:59:24.399080  506631 cri.go:89] found id: "d3882b8636526fe7f302d3351b33fc68a8df5109463693af6642869704a2b6a2"
	I1017 18:59:24.399083  506631 cri.go:89] found id: "99fe19979e6f782cb4ff1df09e72c9c58535540daf8c28b63b4a3f1719cfa365"
	I1017 18:59:24.399086  506631 cri.go:89] found id: "600ce5e0b6a8556aa7c055afc13692cb00b2d6f0a82ba6d0817e5e424b49881c"
	I1017 18:59:24.399088  506631 cri.go:89] found id: "214596c066d6ebce81c069cda2c2790ee022d3770221e7e183390decc49e626b"
	I1017 18:59:24.399095  506631 cri.go:89] found id: "cd49fb8b1ee5ca17b886edf352059ec13c3aa8fed46c8383c6660656f0403d67"
	I1017 18:59:24.399103  506631 cri.go:89] found id: "b3f4b36a5cb43ddf3de225e5f08ad4b2d165ae6234c165082ae1316a48f48425"
	I1017 18:59:24.399106  506631 cri.go:89] found id: "fc47a341f594c2a4203992ac73a7c89fb4722c54e399f0949c3604dfa81f70ef"
	I1017 18:59:24.399108  506631 cri.go:89] found id: "d3140eef7e893c98db6a57843c26dd0767733610bfaab45265577ad3a64a334e"
	I1017 18:59:24.399110  506631 cri.go:89] found id: "bce8d27694469a014387080e94f416e3cfb88071ea69506ea8a1d04b16176e43"
	I1017 18:59:24.399113  506631 cri.go:89] found id: "26a77f9d8fd20d481d8ec7b0d85a65954b10af33ae4994293044ad2067b41872"
	I1017 18:59:24.399115  506631 cri.go:89] found id: "afa9f6b0496818adf412ab2c3cf979e86f2593796860b7b9e53c8fd85f0fe586"
	I1017 18:59:24.399124  506631 cri.go:89] found id: "ea8c7aa6a69f9b8476c9c28a6ac0944597fdf78921727f3c142c41a2b6a9bb00"
	I1017 18:59:24.399130  506631 cri.go:89] found id: "05b0d75fa7e337102c5b778d87f16ae508e704efb9367ba5a98cc93f0460d03c"
	I1017 18:59:24.399134  506631 cri.go:89] found id: "c8959e94a4c121db6d2c59fccf2f1725ca1521aca59330c8262847404ff4a854"
	I1017 18:59:24.399137  506631 cri.go:89] found id: "d6a7317aabf4df8eb271b0bf784be0c6045d3ed3d186ebfc5869cb018026ecfd"
	I1017 18:59:24.399139  506631 cri.go:89] found id: "49aea2d7818a2bf7202542ba97e6f5d99fd7c496045a1c05fbd5332046a05e6f"
	I1017 18:59:24.399142  506631 cri.go:89] found id: "43e40655463cffe530b5aa16eb8ff13e3891f57f9034e26ef39cd927af2c8e4a"
	I1017 18:59:24.399144  506631 cri.go:89] found id: "8b60fdbdcbbd68792cea1184b624381c87a1f1eed5a416aa91d0007baad72c0d"
	I1017 18:59:24.399146  506631 cri.go:89] found id: "44a3d62e9e439ad0c55eef8ceec2ced7e9b2897150b415717801bf2686765caa"
	I1017 18:59:24.399149  506631 cri.go:89] found id: "a76bbc48e30da642f43c612cdc6a0a786d2a6d1c4942a22be68e5c4a9a6f40f9"
	I1017 18:59:24.399151  506631 cri.go:89] found id: ""
	I1017 18:59:24.399193  506631 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 18:59:24.415123  506631 out.go:203] 
	W1017 18:59:24.416376  506631 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 18:59:24.416393  506631 out.go:285] * 
	* 
	W1017 18:59:24.421485  506631 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 18:59:24.423028  506631 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-642189 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.26s)
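The stderr above shows the disable aborting before it touches the addon at all: minikube's paused-state check shells out to sudo runc list -f json, and runc's default state directory /run/runc does not exist on this node, so the probe exits 1 and the command bails with MK_ADDON_DISABLE_PAUSED. The same stderr appears in each of the addons disable failures in this run, which points at one root cause rather than per-addon breakage. A minimal way to reproduce the probe by hand, assuming the addons-642189 profile is still up (the ssh wrapper is a sketch, not taken from the log):

    # the exact probe minikube runs; fails because /run/runc is absent
    minikube -p addons-642189 ssh -- sudo runc list -f json

    # the CRI-O view of the same containers, which the log shows succeeding
    minikube -p addons-642189 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system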

TestFunctional/parallel/ServiceCmdConnect (603.02s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-397448 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-397448 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-67h2c" [d0777f5d-4105-4c02-9b00-9efe368f46d3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-397448 -n functional-397448
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-17 19:15:04.062573477 +0000 UTC m=+1131.178123254
functional_test.go:1645: (dbg) Run:  kubectl --context functional-397448 describe po hello-node-connect-7d85dfc575-67h2c -n default
functional_test.go:1645: (dbg) kubectl --context functional-397448 describe po hello-node-connect-7d85dfc575-67h2c -n default:
Name:             hello-node-connect-7d85dfc575-67h2c
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-397448/192.168.49.2
Start Time:       Fri, 17 Oct 2025 19:05:03 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cgf89 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-cgf89:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-67h2c to functional-397448
Normal   Pulling    7m3s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m3s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m3s (x5 over 9m58s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m46s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m46s (x21 over 9m57s)  kubelet            Error: ImagePullBackOff
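The events above pinpoint the cause: the deployment references the unqualified image name kicbase/echo-server, and with CRI-O's short-name mode set to enforcing, containers/image refuses to pick a registry when more than one unqualified-search registry could resolve the name, hence "returns ambiguous list". Two conventional remedies, sketched on the assumption that the intended image is the Docker Hub copy at docker.io/kicbase/echo-server (neither was applied in this run):

    # 1) fully qualify the reference so short-name resolution never runs
    kubectl --context functional-397448 create deployment hello-node-connect --image docker.io/kicbase/echo-server

    # 2) or pin the short name on the node with a registries.conf drop-in,
    #    using the [aliases] table from containers-registries.conf(5)
    [aliases]
    "kicbase/echo-server" = "docker.io/kicbase/echo-server"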
functional_test.go:1645: (dbg) Run:  kubectl --context functional-397448 logs hello-node-connect-7d85dfc575-67h2c -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-397448 logs hello-node-connect-7d85dfc575-67h2c -n default: exit status 1 (65.470099ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-67h2c" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-397448 logs hello-node-connect-7d85dfc575-67h2c -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
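The empty kubectl logs result above is expected rather than a second failure: the echo-server container was never created, so the API server answers BadRequest ("waiting to start") instead of streaming output. For a pod stuck before its first start, the event stream is the more useful source; a sketch with a standard field selector, pod name taken from the describe output:

    kubectl --context functional-397448 get events -n default --field-selector involvedObject.name=hello-node-connect-7d85dfc575-67h2c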
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-397448 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-67h2c
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-397448/192.168.49.2
Start Time:       Fri, 17 Oct 2025 19:05:03 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cgf89 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-cgf89:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-67h2c to functional-397448
Normal   Pulling    7m3s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m3s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m3s (x5 over 9m58s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m46s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m46s (x21 over 9m57s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-397448 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-397448 logs -l app=hello-node-connect: exit status 1 (66.766164ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-67h2c" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-397448 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-397448 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.72.217
IPs:                      10.106.72.217
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30365/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
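The service itself checks out: the selector matches the deployment and a NodePort (30365) is allocated, but Endpoints is empty because the pod never became Ready, so every connection attempt against the NodePort had nothing behind it. One quick confirmation (command assumed, not run in this log):

    kubectl --context functional-397448 get endpoints hello-node-connect -o wide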
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-397448
helpers_test.go:243: (dbg) docker inspect functional-397448:

-- stdout --
	[
	    {
	        "Id": "1e4444e727ff754ee4009c0e44d1865e7dda0d2a56113989252a72a7d481a144",
	        "Created": "2025-10-17T19:03:14.40452184Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519970,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:03:14.43944392Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/1e4444e727ff754ee4009c0e44d1865e7dda0d2a56113989252a72a7d481a144/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1e4444e727ff754ee4009c0e44d1865e7dda0d2a56113989252a72a7d481a144/hostname",
	        "HostsPath": "/var/lib/docker/containers/1e4444e727ff754ee4009c0e44d1865e7dda0d2a56113989252a72a7d481a144/hosts",
	        "LogPath": "/var/lib/docker/containers/1e4444e727ff754ee4009c0e44d1865e7dda0d2a56113989252a72a7d481a144/1e4444e727ff754ee4009c0e44d1865e7dda0d2a56113989252a72a7d481a144-json.log",
	        "Name": "/functional-397448",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-397448:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-397448",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1e4444e727ff754ee4009c0e44d1865e7dda0d2a56113989252a72a7d481a144",
	                "LowerDir": "/var/lib/docker/overlay2/8a550e6e50e8682de4e6f5e140831a628b56017695dc6c3d5d7d013160e6cf6f-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8a550e6e50e8682de4e6f5e140831a628b56017695dc6c3d5d7d013160e6cf6f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8a550e6e50e8682de4e6f5e140831a628b56017695dc6c3d5d7d013160e6cf6f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8a550e6e50e8682de4e6f5e140831a628b56017695dc6c3d5d7d013160e6cf6f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-397448",
	                "Source": "/var/lib/docker/volumes/functional-397448/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-397448",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-397448",
	                "name.minikube.sigs.k8s.io": "functional-397448",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a6b185094528b0b42cc919e719a723073ffec026a2ffab8a18d88a7f13fddf82",
	            "SandboxKey": "/var/run/docker/netns/a6b185094528",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-397448": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:12:7a:86:38:b7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e504909a8f1cd44c42be8d16eda8093b2d9dbf53a2e9e925cf689395215a4858",
	                    "EndpointID": "eb649947a4b141669ee3e4bc08192d28c400f589dfda0a4c8b401b9d494a3465",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-397448",
	                        "1e4444e727ff"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-397448 -n functional-397448
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-397448 logs -n 25: (1.363995205s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-397448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1406038563/001:/mount1 --alsologtostderr -v=1 │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │                     │
	│ mount          │ -p functional-397448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1406038563/001:/mount2 --alsologtostderr -v=1 │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │                     │
	│ tunnel         │ functional-397448 tunnel --alsologtostderr                                                                         │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │                     │
	│ ssh            │ functional-397448 ssh findmnt -T /mount1                                                                           │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ ssh            │ functional-397448 ssh findmnt -T /mount2                                                                           │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ ssh            │ functional-397448 ssh findmnt -T /mount3                                                                           │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ mount          │ -p functional-397448 --kill=true                                                                                   │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-397448 --alsologtostderr -v=1                                                     │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ start          │ -p functional-397448 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │                     │
	│ start          │ -p functional-397448 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │                     │
	│ update-context │ functional-397448 update-context --alsologtostderr -v=2                                                            │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ update-context │ functional-397448 update-context --alsologtostderr -v=2                                                            │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ update-context │ functional-397448 update-context --alsologtostderr -v=2                                                            │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ image          │ functional-397448 image ls --format short --alsologtostderr                                                        │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ image          │ functional-397448 image ls --format yaml --alsologtostderr                                                         │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ ssh            │ functional-397448 ssh pgrep buildkitd                                                                              │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │                     │
	│ image          │ functional-397448 image ls --format json --alsologtostderr                                                         │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ image          │ functional-397448 image ls --format table --alsologtostderr                                                        │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ image          │ functional-397448 image build -t localhost/my-image:functional-397448 testdata/build --alsologtostderr             │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ image          │ functional-397448 image ls                                                                                         │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ service        │ functional-397448 service list                                                                                     │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:14 UTC │ 17 Oct 25 19:14 UTC │
	│ service        │ functional-397448 service list -o json                                                                             │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:14 UTC │ 17 Oct 25 19:14 UTC │
	│ service        │ functional-397448 service --namespace=default --https --url hello-node                                             │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:14 UTC │                     │
	│ service        │ functional-397448 service hello-node --url --format={{.IP}}                                                        │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:14 UTC │                     │
	│ service        │ functional-397448 service hello-node --url                                                                         │ functional-397448 │ jenkins │ v1.37.0 │ 17 Oct 25 19:14 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:05:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:05:32.397844  535624 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:05:32.398086  535624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:05:32.398094  535624 out.go:374] Setting ErrFile to fd 2...
	I1017 19:05:32.398104  535624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:05:32.398329  535624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:05:32.398794  535624 out.go:368] Setting JSON to false
	I1017 19:05:32.399809  535624 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10071,"bootTime":1760717861,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:05:32.399920  535624 start.go:141] virtualization: kvm guest
	I1017 19:05:32.401780  535624 out.go:179] * [functional-397448] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:05:32.403220  535624 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:05:32.403239  535624 notify.go:220] Checking for updates...
	I1017 19:05:32.405828  535624 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:05:32.407149  535624 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:05:32.408269  535624 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:05:32.409344  535624 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:05:32.410474  535624 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:05:32.412194  535624 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:05:32.412760  535624 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:05:32.440763  535624 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:05:32.440915  535624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:05:32.503085  535624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-17 19:05:32.492518081 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:05:32.503202  535624 docker.go:318] overlay module found
	I1017 19:05:32.504930  535624 out.go:179] * Using the docker driver based on existing profile
	I1017 19:05:32.506351  535624 start.go:305] selected driver: docker
	I1017 19:05:32.506371  535624 start.go:925] validating driver "docker" against &{Name:functional-397448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-397448 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:05:32.506462  535624 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:05:32.506567  535624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:05:32.569211  535624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-17 19:05:32.558218072 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:05:32.570219  535624 cni.go:84] Creating CNI manager for ""
	I1017 19:05:32.570297  535624 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:05:32.570367  535624 start.go:349] cluster config:
	{Name:functional-397448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-397448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:05:32.572233  535624 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 17 19:05:27 functional-397448 crio[3603]: time="2025-10-17T19:05:27.341232621Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:05:27 functional-397448 crio[3603]: time="2025-10-17T19:05:27.3414262Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ee7f2480b62f77bc2e406ea6e5ffbb888a109139e7fdfd6958c0ba5269b2e72e/merged/etc/group: no such file or directory"
	Oct 17 19:05:27 functional-397448 crio[3603]: time="2025-10-17T19:05:27.341773352Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:05:27 functional-397448 crio[3603]: time="2025-10-17T19:05:27.370503181Z" level=info msg="Created container 1e5340fef8f6f7d061b5c1339ce977d443ff00c7f0ff2e510658b89103ae4a2b: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-69hjs/dashboard-metrics-scraper" id=bcf236d4-faba-4471-a06e-767fc89fe7ad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:05:27 functional-397448 crio[3603]: time="2025-10-17T19:05:27.371218926Z" level=info msg="Starting container: 1e5340fef8f6f7d061b5c1339ce977d443ff00c7f0ff2e510658b89103ae4a2b" id=14ba2b0b-2d38-49d8-b01a-85bb81753e6e name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:05:27 functional-397448 crio[3603]: time="2025-10-17T19:05:27.373341352Z" level=info msg="Started container" PID=7462 containerID=1e5340fef8f6f7d061b5c1339ce977d443ff00c7f0ff2e510658b89103ae4a2b description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-69hjs/dashboard-metrics-scraper id=14ba2b0b-2d38-49d8-b01a-85bb81753e6e name=/runtime.v1.RuntimeService/StartContainer sandboxID=648cf3207790a79c6a2653d5aa0bcdb74f8d46040a11075584cb206e739ac3ea
	Oct 17 19:05:30 functional-397448 crio[3603]: time="2025-10-17T19:05:30.76883621Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=d7627360-f992-4e41-8712-750ea5e0f9c6 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:05:30 functional-397448 crio[3603]: time="2025-10-17T19:05:30.76960047Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=6e6e471d-7b00-4ac1-a9a2-7cce630514d9 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:05:30 functional-397448 crio[3603]: time="2025-10-17T19:05:30.771568647Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=df4971f4-af77-4e90-aead-cd953443de25 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:05:30 functional-397448 crio[3603]: time="2025-10-17T19:05:30.785660269Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qnf6v/kubernetes-dashboard" id=ecd26bbb-a956-4665-b7a4-ac43ebdaf344 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:05:30 functional-397448 crio[3603]: time="2025-10-17T19:05:30.786478546Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:05:30 functional-397448 crio[3603]: time="2025-10-17T19:05:30.819183973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:05:30 functional-397448 crio[3603]: time="2025-10-17T19:05:30.819435186Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/98aeb429fcc32990ec560d15957673005945ec7879c13ecbbbc90be4356f1375/merged/etc/group: no such file or directory"
	Oct 17 19:05:30 functional-397448 crio[3603]: time="2025-10-17T19:05:30.819912146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:05:30 functional-397448 crio[3603]: time="2025-10-17T19:05:30.893601093Z" level=info msg="Created container 6def454f41a1ab86211a711e56509410de6ad16d33a840400876a5ff399c0c71: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qnf6v/kubernetes-dashboard" id=ecd26bbb-a956-4665-b7a4-ac43ebdaf344 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:05:30 functional-397448 crio[3603]: time="2025-10-17T19:05:30.894577853Z" level=info msg="Starting container: 6def454f41a1ab86211a711e56509410de6ad16d33a840400876a5ff399c0c71" id=5a009698-49a5-43e3-be94-8b21ca850f7e name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:05:30 functional-397448 crio[3603]: time="2025-10-17T19:05:30.897238725Z" level=info msg="Started container" PID=7519 containerID=6def454f41a1ab86211a711e56509410de6ad16d33a840400876a5ff399c0c71 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qnf6v/kubernetes-dashboard id=5a009698-49a5-43e3-be94-8b21ca850f7e name=/runtime.v1.RuntimeService/StartContainer sandboxID=edcf2af81bbd3cfefa2ecfa337095aac21b89476b99c01cbd193270a2b854f9e
	Oct 17 19:05:35 functional-397448 crio[3603]: time="2025-10-17T19:05:35.227974447Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=88835a4e-d4e6-4b57-9556-36fb77039b18 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:05:45 functional-397448 crio[3603]: time="2025-10-17T19:05:45.228938804Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c06a3887-7aab-49bb-b4ba-3162c31ec7bd name=/runtime.v1.ImageService/PullImage
	Oct 17 19:06:22 functional-397448 crio[3603]: time="2025-10-17T19:06:22.228509243Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9014ba64-70e2-4926-8579-f30bcf6f76ba name=/runtime.v1.ImageService/PullImage
	Oct 17 19:06:37 functional-397448 crio[3603]: time="2025-10-17T19:06:37.228650283Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8e3bc3b7-e519-4e45-ab47-5ab5a8cf3e03 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:07:57 functional-397448 crio[3603]: time="2025-10-17T19:07:57.228942632Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=84597401-ded1-43f5-9edf-5b1bafea7246 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:08:01 functional-397448 crio[3603]: time="2025-10-17T19:08:01.228930358Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1fa499ca-d6d7-4a46-b3fc-a43fe25576f0 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:10:38 functional-397448 crio[3603]: time="2025-10-17T19:10:38.228577916Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=49eca1bf-0bcf-491d-86fb-ca08d9dab8d4 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:10:43 functional-397448 crio[3603]: time="2025-10-17T19:10:43.228411088Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f2b3d18e-61a9-417f-b795-f608bac3830c name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	6def454f41a1a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   edcf2af81bbd3       kubernetes-dashboard-855c9754f9-qnf6v        kubernetes-dashboard
	1e5340fef8f6f       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   648cf3207790a       dashboard-metrics-scraper-77bf4d6c4c-69hjs   kubernetes-dashboard
	de112833fec4e       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                  9 minutes ago       Running             nginx                       0                   b300539b1c1a4       nginx-svc                                    default
	a5a4dd66bd985       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   96e2cf9711562       busybox-mount                                default
	7ee5621ed25e3       docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115                  9 minutes ago       Running             myfrontend                  0                   34e6a1d1024c8       sp-pod                                       default
	95214fa02dc95       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  10 minutes ago      Running             mysql                       0                   e4fee9c52b754       mysql-5bb876957f-xhhl6                       default
	d099581ad9284       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   c2d1ecffad405       kube-apiserver-functional-397448             kube-system
	fdf6f9530423a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   f19e283d64585       kube-scheduler-functional-397448             kube-system
	3c89032d52bc4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   059db1390e484       kube-controller-manager-functional-397448    kube-system
	2546decf76f43       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   c2fc150c9dd88       etcd-functional-397448                       kube-system
	8a82e255144ce       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   54c7c88de22f9       kindnet-mnd5j                                kube-system
	0b4f0c950c824       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 10 minutes ago      Running             kube-proxy                  1                   a40586c4ed84b       kube-proxy-vczbp                             kube-system
	7a2b9cf3a7b6f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Exited              kube-controller-manager     1                   059db1390e484       kube-controller-manager-functional-397448    kube-system
	89737a83d3dd4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   2cba9afa7c18c       storage-provisioner                          kube-system
	4602154bcf4d6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   79be9113bb84f       coredns-66bc5c9577-hgk7b                     kube-system
	20015dbafef84       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   79be9113bb84f       coredns-66bc5c9577-hgk7b                     kube-system
	0d9d152051786       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   2cba9afa7c18c       storage-provisioner                          kube-system
	4be829bfb408e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   a40586c4ed84b       kube-proxy-vczbp                             kube-system
	addc4b5c73664       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   54c7c88de22f9       kindnet-mnd5j                                kube-system
	8dff6c024890a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   f19e283d64585       kube-scheduler-functional-397448             kube-system
	08a093c35c3c3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   c2fc150c9dd88       etcd-functional-397448                       kube-system
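	
	Note: the listing above is effectively `crictl ps -a` output from the node: each restarted control-plane component leaves its Exited attempt (ATTEMPT 0) alongside the Running one (ATTEMPT 1 or 2), and both share the same pod sandbox ID (e.g. the two coredns entries both point at 79be9113bb84f). A minimal way to reproduce this view, using this run's profile name and the report's own binary path (crictl ships inside the minikube node image, though its column set can vary by version):
	
	  # list all containers, including exited ones, on the functional-397448 node
	  out/minikube-linux-amd64 -p functional-397448 ssh -- sudo crictl ps -a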
	
	
	==> coredns [20015dbafef84f87105cece158a8391195039164324a70b000b5d96de808d5a6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40889 - 33599 "HINFO IN 4755913506343606293.8839631820753260766. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.463987297s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4602154bcf4d61122f0496a58d57d8492ae2f9ac5b9e9dc12210338484a32532] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53744 - 19784 "HINFO IN 3606013369711767435.6702171796740999639. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.091916738s
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
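	
	Note: the three "[ERROR] plugin/kubernetes: Unhandled Error" lines are consistent with CoreDNS's API-server watches breaking during the control-plane restart around 19:04 (visible in the etcd and kube-apiserver sections below); this is an interpretation of the logs, not something the report states. Assuming kubectl is pointed at this cluster, the exited and current CoreDNS containers can be compared directly:
	
	  # logs of the current coredns container (pod name from the listing above)
	  kubectl -n kube-system logs coredns-66bc5c9577-hgk7b
	  # logs of the previous, exited container from before the restart
	  kubectl -n kube-system logs coredns-66bc5c9577-hgk7b --previous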
	
	
	==> describe nodes <==
	Name:               functional-397448
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-397448
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=functional-397448
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_03_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:03:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-397448
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:15:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:15:02 +0000   Fri, 17 Oct 2025 19:03:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:15:02 +0000   Fri, 17 Oct 2025 19:03:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:15:02 +0000   Fri, 17 Oct 2025 19:03:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:15:02 +0000   Fri, 17 Oct 2025 19:03:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-397448
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                66f16210-a3cd-4719-a4a1-a83fa3ad067b
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-8698w                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-67h2c           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-xhhl6                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 coredns-66bc5c9577-hgk7b                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-397448                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-mnd5j                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-397448              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-397448     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-vczbp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-397448              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-69hjs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qnf6v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-397448 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-397448 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-397448 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-397448 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-397448 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-397448 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-397448 event: Registered Node functional-397448 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-397448 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x9 over 10m)  kubelet          Node functional-397448 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-397448 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-397448 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-397448 event: Registered Node functional-397448 in Controller
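	
	Note: in the Allocated resources table above, the percentages are requests and limits measured against the node's allocatable capacity: 1450m CPU against 8000m allocatable is ~18%, and 732Mi of memory against 32863432Ki is ~2%. Assuming kubectl is pointed at this cluster, the same view can be regenerated with:
	
	  kubectl describe node functional-397448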
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d1 49 91 03 c2 08 06
	[  +0.000804] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 a9 2b 44 da ae 08 06
	[Oct17 18:59] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022229] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023876] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
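	
	Note: the recurring "martian source 10.244.0.22 from 127.0.0.1" entries are commonly associated with route_localnet=1, which the kube-proxy sections below report setting so that NodePorts accept connections on localhost; with that sysctl, loopback-sourced packets can reach the pod network and the kernel flags them as martians. One way to confirm the sysctl on this node (an illustration, not part of the test run):
	
	  out/minikube-linux-amd64 -p functional-397448 ssh -- sysctl net.ipv4.conf.all.route_localnet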
	
	
	==> etcd [08a093c35c3c3dad10b1192205e660048ed162f71bc50f3e5b63c4f3168407e8] <==
	{"level":"warn","ts":"2025-10-17T19:03:29.101826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:03:29.108914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:03:29.115666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:03:29.129011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:03:29.136167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:03:29.142549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:03:29.199048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45476","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T19:04:25.854275Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-17T19:04:25.854381Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-397448","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-17T19:04:25.854481Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-17T19:04:25.856026Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-17T19:04:25.856114Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T19:04:25.856148Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-17T19:04:25.856231Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-17T19:04:25.856248Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-17T19:04:25.856256Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-17T19:04:25.856290Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-17T19:04:25.856266Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-17T19:04:25.856307Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-17T19:04:25.856313Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-17T19:04:25.856348Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T19:04:25.858820Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-17T19:04:25.858889Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T19:04:25.858922Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-17T19:04:25.858933Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-397448","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [2546decf76f4334fdded85e8bb915bd61ef2ab9a6a11672f657723d9b52f2c4d] <==
	{"level":"warn","ts":"2025-10-17T19:04:28.780894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.792647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.800211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.808605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.816539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.824089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.831774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.839178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.846769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.853358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.859950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.873237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.880158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.887627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.894575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.902819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.911420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.918122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.931831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.939176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:28.945895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:04:29.000399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55656","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T19:14:28.471794Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1161}
	{"level":"info","ts":"2025-10-17T19:14:28.493385Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1161,"took":"21.207178ms","hash":428818042,"current-db-size-bytes":3543040,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1654784,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-10-17T19:14:28.493437Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":428818042,"revision":1161,"compact-revision":-1}
	
	
	==> kernel <==
	 19:15:05 up  2:57,  0 user,  load average: 0.17, 0.23, 0.48
	Linux functional-397448 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8a82e255144ce2c6d83e7ff8c1f9d6348d76c63ffd1e8e445eaa43a53290f610] <==
	I1017 19:12:56.437055       1 main.go:301] handling current node
	I1017 19:13:06.427317       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:13:06.427366       1 main.go:301] handling current node
	I1017 19:13:16.428797       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:13:16.428838       1 main.go:301] handling current node
	I1017 19:13:26.427389       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:13:26.427437       1 main.go:301] handling current node
	I1017 19:13:36.427419       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:13:36.427458       1 main.go:301] handling current node
	I1017 19:13:46.428293       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:13:46.428333       1 main.go:301] handling current node
	I1017 19:13:56.428827       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:13:56.428869       1 main.go:301] handling current node
	I1017 19:14:06.427588       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:14:06.427623       1 main.go:301] handling current node
	I1017 19:14:16.435820       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:14:16.435857       1 main.go:301] handling current node
	I1017 19:14:26.428316       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:14:26.428360       1 main.go:301] handling current node
	I1017 19:14:36.427792       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:14:36.427838       1 main.go:301] handling current node
	I1017 19:14:46.427853       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:14:46.427905       1 main.go:301] handling current node
	I1017 19:14:56.428006       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:14:56.428071       1 main.go:301] handling current node
	
	
	==> kindnet [addc4b5c736647852282f4dc748682dfea095c0d266e907b031dcb8cfd9fe968] <==
	I1017 19:03:38.543076       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:03:38.543422       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1017 19:03:38.543604       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:03:38.543623       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:03:38.543654       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:03:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:03:38.936496       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:03:38.936543       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:03:38.936559       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:03:39.037121       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:03:39.236998       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:03:39.237026       1 metrics.go:72] Registering metrics
	I1017 19:03:39.237104       1 controller.go:711] "Syncing nftables rules"
	I1017 19:03:48.744169       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:03:48.744224       1 main.go:301] handling current node
	I1017 19:03:58.750939       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:03:58.750982       1 main.go:301] handling current node
	I1017 19:04:08.747797       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:04:08.747829       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d099581ad9284561237225a724aed2bd055f13f25683465b5a47e7e6051feaea] <==
	I1017 19:04:30.334610       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:04:30.383770       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1017 19:04:30.589212       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1017 19:04:30.590703       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:04:30.595339       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:04:31.097442       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 19:04:31.195895       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 19:04:31.253772       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:04:31.260768       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:04:32.822466       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:04:49.547646       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.103.101.151"}
	I1017 19:04:53.531470       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.161.109"}
	I1017 19:04:57.360672       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.92.179"}
	I1017 19:05:03.735044       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.72.217"}
	E1017 19:05:10.498005       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38302: use of closed network connection
	E1017 19:05:11.337839       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38316: use of closed network connection
	E1017 19:05:13.018603       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38342: use of closed network connection
	E1017 19:05:13.181882       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38352: use of closed network connection
	E1017 19:05:21.656808       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45590: use of closed network connection
	I1017 19:05:23.340000       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.153.78"}
	I1017 19:05:25.498069       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 19:05:25.632117       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.225.12"}
	I1017 19:05:25.646635       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.52.147"}
	I1017 19:14:29.416060       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [3c89032d52bc4ccde2d9cac6a55e9d7a461c1a90b87e7826c45808550136adfb] <==
	I1017 19:04:32.816375       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 19:04:32.816476       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 19:04:32.817063       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 19:04:32.817133       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 19:04:32.819141       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 19:04:32.820866       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 19:04:32.820982       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 19:04:32.821053       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-397448"
	I1017 19:04:32.821119       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:04:32.821110       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 19:04:32.821222       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 19:04:32.824375       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 19:04:32.826269       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 19:04:32.827725       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 19:04:32.828566       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 19:04:32.831815       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 19:04:32.831934       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:04:32.834140       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 19:04:32.839814       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1017 19:05:25.552084       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1017 19:05:25.556444       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1017 19:05:25.558277       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1017 19:05:25.562822       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1017 19:05:25.562838       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1017 19:05:25.567172       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [7a2b9cf3a7b6f24042ce6d06854ed452d5b73f091a56bbfdc75ab905d1b3ee39] <==
	I1017 19:04:16.405569       1 serving.go:386] Generated self-signed cert in-memory
	I1017 19:04:17.171407       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1017 19:04:17.171434       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:04:17.172898       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1017 19:04:17.172957       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1017 19:04:17.173322       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1017 19:04:17.173426       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 19:04:27.176061       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [0b4f0c950c824f6f17989a9eb7128ac196411e78b393c2f1b6c543d3c9c2dce2] <==
	I1017 19:04:16.148662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1017 19:04:16.150643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-397448&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:04:17.708099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-397448&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:04:20.896108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-397448&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:04:26.689789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-397448&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1017 19:04:33.348848       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:04:33.348891       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 19:04:33.348985       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:04:33.369105       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:04:33.369177       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:04:33.374996       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:04:33.375365       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:04:33.375414       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:04:33.376545       1 config.go:200] "Starting service config controller"
	I1017 19:04:33.376568       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:04:33.376619       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:04:33.376627       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:04:33.376674       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:04:33.376718       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:04:33.376771       1 config.go:309] "Starting node config controller"
	I1017 19:04:33.376779       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:04:33.376786       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:04:33.477203       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:04:33.477315       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:04:33.477341       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [4be829bfb408e480df1e9f1c17e4911d3ea82f920eef97a13e13534f53ef4e3b] <==
	I1017 19:03:38.428971       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:03:38.495013       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:03:38.595188       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:03:38.595230       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 19:03:38.595324       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:03:38.615799       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:03:38.615863       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:03:38.621954       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:03:38.622463       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:03:38.622502       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:03:38.623897       1 config.go:200] "Starting service config controller"
	I1017 19:03:38.623932       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:03:38.623990       1 config.go:309] "Starting node config controller"
	I1017 19:03:38.624001       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:03:38.624086       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:03:38.624101       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:03:38.624121       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:03:38.624144       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:03:38.724177       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:03:38.724250       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:03:38.724286       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:03:38.724282       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8dff6c024890a6e609e75327af8a1e5de3c5eebca326d9ba3aba76a6b6815a2f] <==
	E1017 19:03:29.666524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:03:29.666544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:03:29.666550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:03:29.666584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:03:29.666641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:03:29.666716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:03:30.489941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:03:30.583005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 19:03:30.630349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:03:30.672284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:03:30.673138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:03:30.716723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:03:30.731961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:03:30.739119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:03:30.851859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:03:30.861154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:03:30.901577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:03:30.923849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1017 19:03:33.763092       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:04:25.745198       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1017 19:04:25.745176       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:04:25.745252       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1017 19:04:25.745307       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1017 19:04:25.745346       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1017 19:04:25.745382       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fdf6f9530423a1aab2af869edccc353bf37d2923189a3fcc82a146d1c67146cd] <==
	I1017 19:04:28.178060       1 serving.go:386] Generated self-signed cert in-memory
	W1017 19:04:29.413187       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 19:04:29.413229       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 19:04:29.413243       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 19:04:29.413255       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 19:04:29.431376       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 19:04:29.431403       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:04:29.433617       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:04:29.433655       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:04:29.433915       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 19:04:29.433980       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 19:04:29.533845       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:12:34 functional-397448 kubelet[4146]: E1017 19:12:34.227770    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-67h2c" podUID="d0777f5d-4105-4c02-9b00-9efe368f46d3"
	Oct 17 19:12:41 functional-397448 kubelet[4146]: E1017 19:12:41.228667    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8698w" podUID="4426ac7a-ace1-431f-a007-fff351fc07f8"
	Oct 17 19:12:45 functional-397448 kubelet[4146]: E1017 19:12:45.227601    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-67h2c" podUID="d0777f5d-4105-4c02-9b00-9efe368f46d3"
	Oct 17 19:12:53 functional-397448 kubelet[4146]: E1017 19:12:53.228483    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8698w" podUID="4426ac7a-ace1-431f-a007-fff351fc07f8"
	Oct 17 19:12:58 functional-397448 kubelet[4146]: E1017 19:12:58.227779    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-67h2c" podUID="d0777f5d-4105-4c02-9b00-9efe368f46d3"
	Oct 17 19:13:04 functional-397448 kubelet[4146]: E1017 19:13:04.228492    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8698w" podUID="4426ac7a-ace1-431f-a007-fff351fc07f8"
	Oct 17 19:13:09 functional-397448 kubelet[4146]: E1017 19:13:09.227892    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-67h2c" podUID="d0777f5d-4105-4c02-9b00-9efe368f46d3"
	Oct 17 19:13:17 functional-397448 kubelet[4146]: E1017 19:13:17.228559    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8698w" podUID="4426ac7a-ace1-431f-a007-fff351fc07f8"
	Oct 17 19:13:20 functional-397448 kubelet[4146]: E1017 19:13:20.228416    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-67h2c" podUID="d0777f5d-4105-4c02-9b00-9efe368f46d3"
	Oct 17 19:13:28 functional-397448 kubelet[4146]: E1017 19:13:28.228411    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8698w" podUID="4426ac7a-ace1-431f-a007-fff351fc07f8"
	Oct 17 19:13:32 functional-397448 kubelet[4146]: E1017 19:13:32.228118    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-67h2c" podUID="d0777f5d-4105-4c02-9b00-9efe368f46d3"
	Oct 17 19:13:41 functional-397448 kubelet[4146]: E1017 19:13:41.227948    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8698w" podUID="4426ac7a-ace1-431f-a007-fff351fc07f8"
	Oct 17 19:13:47 functional-397448 kubelet[4146]: E1017 19:13:47.229802    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-67h2c" podUID="d0777f5d-4105-4c02-9b00-9efe368f46d3"
	Oct 17 19:13:54 functional-397448 kubelet[4146]: E1017 19:13:54.227599    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8698w" podUID="4426ac7a-ace1-431f-a007-fff351fc07f8"
	Oct 17 19:14:00 functional-397448 kubelet[4146]: E1017 19:14:00.227706    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-67h2c" podUID="d0777f5d-4105-4c02-9b00-9efe368f46d3"
	Oct 17 19:14:05 functional-397448 kubelet[4146]: E1017 19:14:05.228016    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8698w" podUID="4426ac7a-ace1-431f-a007-fff351fc07f8"
	Oct 17 19:14:11 functional-397448 kubelet[4146]: E1017 19:14:11.227791    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-67h2c" podUID="d0777f5d-4105-4c02-9b00-9efe368f46d3"
	Oct 17 19:14:17 functional-397448 kubelet[4146]: E1017 19:14:17.229985    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8698w" podUID="4426ac7a-ace1-431f-a007-fff351fc07f8"
	Oct 17 19:14:23 functional-397448 kubelet[4146]: E1017 19:14:23.228372    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-67h2c" podUID="d0777f5d-4105-4c02-9b00-9efe368f46d3"
	Oct 17 19:14:29 functional-397448 kubelet[4146]: E1017 19:14:29.230218    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8698w" podUID="4426ac7a-ace1-431f-a007-fff351fc07f8"
	Oct 17 19:14:35 functional-397448 kubelet[4146]: E1017 19:14:35.230293    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-67h2c" podUID="d0777f5d-4105-4c02-9b00-9efe368f46d3"
	Oct 17 19:14:43 functional-397448 kubelet[4146]: E1017 19:14:43.227916    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8698w" podUID="4426ac7a-ace1-431f-a007-fff351fc07f8"
	Oct 17 19:14:48 functional-397448 kubelet[4146]: E1017 19:14:48.228187    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-67h2c" podUID="d0777f5d-4105-4c02-9b00-9efe368f46d3"
	Oct 17 19:14:56 functional-397448 kubelet[4146]: E1017 19:14:56.227608    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8698w" podUID="4426ac7a-ace1-431f-a007-fff351fc07f8"
	Oct 17 19:15:02 functional-397448 kubelet[4146]: E1017 19:15:02.227804    4146 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-67h2c" podUID="d0777f5d-4105-4c02-9b00-9efe368f46d3"
	
	
	==> kubernetes-dashboard [6def454f41a1ab86211a711e56509410de6ad16d33a840400876a5ff399c0c71] <==
	2025/10/17 19:05:30 Starting overwatch
	2025/10/17 19:05:30 Using namespace: kubernetes-dashboard
	2025/10/17 19:05:30 Using in-cluster config to connect to apiserver
	2025/10/17 19:05:30 Using secret token for csrf signing
	2025/10/17 19:05:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 19:05:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 19:05:30 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 19:05:30 Generating JWE encryption key
	2025/10/17 19:05:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 19:05:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 19:05:31 Initializing JWE encryption key from synchronized object
	2025/10/17 19:05:31 Creating in-cluster Sidecar client
	2025/10/17 19:05:31 Successful request to sidecar
	2025/10/17 19:05:31 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [0d9d152051786ae803c982003430d3e512c4de5c3e620aacd1b14d637f321362] <==
	I1017 19:03:49.514349       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-397448_67db0621-1bf8-4eda-86d4-92ed639cddd0!
	W1017 19:03:51.425373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:03:51.430889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:03:53.434904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:03:53.439483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:03:55.443542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:03:55.449937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:03:57.453186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:03:57.457393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:03:59.460782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:03:59.466040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:04:01.469892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:04:01.474652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:04:03.478676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:04:03.483591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:04:05.487461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:04:05.492366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:04:07.498831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:04:07.506093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:04:09.510254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:04:09.514640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:04:11.518281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:04:11.524274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:04:13.527761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:04:13.533566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [89737a83d3dd435630f95b6ea093f84d15915ea116ed30edef824dba4804255f] <==
	W1017 19:14:41.311490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:43.314593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:43.319325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:45.322888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:45.327183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:47.330608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:47.336306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:49.340255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:49.344811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:51.348821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:51.353205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:53.356845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:53.361283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:55.365570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:55.370172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:57.373529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:57.379462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:59.383763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:14:59.389041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:15:01.392490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:15:01.400285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:15:03.404164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:15:03.408489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:15:05.412254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:15:05.417137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-397448 -n functional-397448
helpers_test.go:269: (dbg) Run:  kubectl --context functional-397448 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-8698w hello-node-connect-7d85dfc575-67h2c
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-397448 describe pod busybox-mount hello-node-75c85bcc94-8698w hello-node-connect-7d85dfc575-67h2c
helpers_test.go:290: (dbg) kubectl --context functional-397448 describe pod busybox-mount hello-node-75c85bcc94-8698w hello-node-connect-7d85dfc575-67h2c:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-397448/192.168.49.2
	Start Time:       Fri, 17 Oct 2025 19:05:17 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://a5a4dd66bd9853c330253db69cfc993a822d53d424fb39d816cdc29d6e05f4b0
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 17 Oct 2025 19:05:18 +0000
	      Finished:     Fri, 17 Oct 2025 19:05:18 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tp95x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-tp95x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m49s  default-scheduler  Successfully assigned default/busybox-mount to functional-397448
	  Normal  Pulling    9m49s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m48s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 741ms (741ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m48s  kubelet            Created container: mount-munger
	  Normal  Started    9m48s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-8698w
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-397448/192.168.49.2
	Start Time:       Fri, 17 Oct 2025 19:04:53 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dbzkz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dbzkz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8698w to functional-397448
	  Normal   Pulling    7m9s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m9s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m9s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    10s (x44 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     10s (x44 over 10m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-67h2c
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-397448/192.168.49.2
	Start Time:       Fri, 17 Oct 2025 19:05:03 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cgf89 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cgf89:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-67h2c to functional-397448
	  Normal   Pulling    7m5s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m5s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m5s (x5 over 10m)      kubelet            Error: ErrImagePull
	  Normal   BackOff    4m48s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m48s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.02s)
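
Note: every kubelet event in the logs above carries the same root cause: "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list". That message comes from CRI-O's short-name policy (containers-registries.conf): with short-name-mode = "enforcing", an unqualified reference like kicbase/echo-server is refused whenever it could resolve against more than one unqualified-search registry and no alias is defined. The commands below are a sketch for confirming the diagnosis by hand; they assume the functional-397448 profile is still running and that docker.io is where the image actually lives:

	# Show the short-name policy configured inside the node
	out/minikube-linux-amd64 -p functional-397448 ssh -- grep -r short-name-mode /etc/containers/
	# Enforcing mode only rejects ambiguous *short* names, so a fully qualified pull should succeed
	out/minikube-linux-amd64 -p functional-397448 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest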

TestFunctional/parallel/ServiceCmd/DeployApp (600.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-397448 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-397448 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-8698w" [4426ac7a-ace1-431f-a007-fff351fc07f8] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-397448 -n functional-397448
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-17 19:14:53.874975072 +0000 UTC m=+1120.990524846
functional_test.go:1460: (dbg) Run:  kubectl --context functional-397448 describe po hello-node-75c85bcc94-8698w -n default
functional_test.go:1460: (dbg) kubectl --context functional-397448 describe po hello-node-75c85bcc94-8698w -n default:
Name:             hello-node-75c85bcc94-8698w
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-397448/192.168.49.2
Start Time:       Fri, 17 Oct 2025 19:04:53 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dbzkz (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-dbzkz:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8698w to functional-397448
Normal   Pulling    6m56s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m56s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m56s (x5 over 10m)     kubelet            Error: ErrImagePull
Normal   BackOff    4m50s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m50s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-397448 logs hello-node-75c85bcc94-8698w -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-397448 logs hello-node-75c85bcc94-8698w -n default: exit status 1 (74.809488ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-8698w" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-397448 logs hello-node-75c85bcc94-8698w -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.66s)
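
Note: the deployment here is created with an unqualified image (kubectl create deployment hello-node --image kicbase/echo-server), so it hits the same enforcing short-name policy as above. One way to unblock unqualified pulls without touching the test is a short-name alias, which the containers registries configuration supports through an [aliases] table in a registries.conf.d drop-in. A sketch only, assuming the node allows writes under /etc/containers/registries.conf.d/ and that minikube ssh forwards stdin:

	printf '[aliases]\n"kicbase/echo-server" = "docker.io/kicbase/echo-server"\n' \
	  | out/minikube-linux-amd64 -p functional-397448 ssh -- sudo tee /etc/containers/registries.conf.d/99-echo-server.conf
	# reload asks CRI-O to re-read its registry configuration; a full restart is the blunt fallback
	out/minikube-linux-amd64 -p functional-397448 ssh -- sudo systemctl reload crio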

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 image load --daemon kicbase/echo-server:functional-397448 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-397448" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 image load --daemon kicbase/echo-server:functional-397448 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-397448" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-397448
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 image load --daemon kicbase/echo-server:functional-397448 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-397448" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.40s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 image save kicbase/echo-server:functional-397448 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1017 19:05:01.568151  530481 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:05:01.568501  530481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:05:01.568512  530481 out.go:374] Setting ErrFile to fd 2...
	I1017 19:05:01.568516  530481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:05:01.568753  530481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:05:01.569441  530481 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:05:01.569548  530481 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:05:01.570001  530481 cli_runner.go:164] Run: docker container inspect functional-397448 --format={{.State.Status}}
	I1017 19:05:01.591534  530481 ssh_runner.go:195] Run: systemctl --version
	I1017 19:05:01.591601  530481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-397448
	I1017 19:05:01.615946  530481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/functional-397448/id_rsa Username:docker}
	I1017 19:05:01.722415  530481 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1017 19:05:01.722495  530481 cache_images.go:254] Failed to load cached images for "functional-397448": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1017 19:05:01.722524  530481 cache_images.go:266] failed pushing to: functional-397448

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-397448
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 image save --daemon kicbase/echo-server:functional-397448 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-397448 image save --daemon kicbase/echo-server:functional-397448 --alsologtostderr: (1.810783659s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-397448
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-397448: exit status 1 (21.397864ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-397448

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-397448

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.86s)
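
Note: the six ImageCommands failures above all sit in the save/load path between the host Docker daemon and the cluster's CRI-O storage, and two of them are coupled: ImageSaveToFile never wrote the tarball, so ImageLoadFromFile had nothing to stat. A manual round trip narrows down which half is broken; this is a sketch, with /tmp/echo.tar as an arbitrary scratch path:

	out/minikube-linux-amd64 -p functional-397448 image save kicbase/echo-server:functional-397448 /tmp/echo.tar --alsologtostderr
	ls -l /tmp/echo.tar   # a missing or zero-byte file means save is the broken half
	out/minikube-linux-amd64 -p functional-397448 image load /tmp/echo.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-397448 image ls | grep echo-server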

TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397448 service --namespace=default --https --url hello-node: exit status 115 (550.202829ms)

-- stdout --
	https://192.168.49.2:30764
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-397448 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

TestFunctional/parallel/ServiceCmd/Format (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397448 service hello-node --url --format={{.IP}}: exit status 115 (547.640376ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-397448 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.55s)

TestFunctional/parallel/ServiceCmd/URL (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397448 service hello-node --url: exit status 115 (547.061208ms)

-- stdout --
	http://192.168.49.2:30764
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-397448 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30764
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.55s)
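
Note: HTTPS, Format, and URL are secondary failures. minikube resolves the NodePort fine (it prints https://192.168.49.2:30764 and friends) but exits with SVC_UNREACHABLE because the hello-node service has no running backend, which is the same ImagePullBackOff again. A sketch for confirming the service is empty, assuming the profile is still up:

	kubectl --context functional-397448 get pods -l app=hello-node -o wide
	# EndpointSlices carry the kubernetes.io/service-name label; no ready endpoints here means no backends
	kubectl --context functional-397448 get endpointslices -l kubernetes.io/service-name=hello-node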

TestJSONOutput/pause/Command (1.62s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-581322 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-581322 --output=json --user=testUser: exit status 80 (1.620699086s)

-- stdout --
	{"specversion":"1.0","id":"c8c0aff4-be95-4caa-9dd9-b7114648e644","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-581322 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"c28be1c3-42b4-43bf-861d-23ae0e733b79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-17T19:23:50Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"d9087afa-aaa7-47e9-92d4-cc2d6ee91d30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-581322 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.62s)

TestJSONOutput/unpause/Command (2.31s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-581322 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-581322 --output=json --user=testUser: exit status 80 (2.307193043s)

-- stdout --
	{"specversion":"1.0","id":"f1ff18e9-93ee-4496-964c-9bdfcc98300b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-581322 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"0fc925f7-611a-4b13-b3a9-422cd09f3141","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-17T19:23:52Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"89cc6395-5529-47a0-a390-420da7875900","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-581322 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.31s)
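
Note: pause and unpause die in the same underlying call: "sudo runc list -f json" exits 1 with "open /run/runc: no such file or directory", meaning the runc state directory minikube's pause path expects is absent on the node. Whether CRI-O is actually driving a different OCI runtime (or a non-default runc root) can be checked directly; a sketch, assuming the json-output-581322 profile still exists:

	# Reproduce the failing call exactly as minikube runs it
	out/minikube-linux-amd64 -p json-output-581322 ssh -- sudo runc list -f json
	# Query the CRI runtime's status through crictl (CRI-O reports its runtime details here)
	out/minikube-linux-amd64 -p json-output-581322 ssh -- sudo crictl info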

TestPause/serial/Pause (5.55s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-022753 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-022753 --alsologtostderr -v=5: exit status 80 (1.808504047s)

-- stdout --
	* Pausing node pause-022753 ... 
	
	

-- /stdout --
** stderr ** 
	I1017 19:39:40.474260  717372 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:39:40.474591  717372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:39:40.474602  717372 out.go:374] Setting ErrFile to fd 2...
	I1017 19:39:40.474609  717372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:39:40.474911  717372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:39:40.475210  717372 out.go:368] Setting JSON to false
	I1017 19:39:40.475281  717372 mustload.go:65] Loading cluster: pause-022753
	I1017 19:39:40.475788  717372 config.go:182] Loaded profile config "pause-022753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:39:40.476245  717372 cli_runner.go:164] Run: docker container inspect pause-022753 --format={{.State.Status}}
	I1017 19:39:40.495677  717372 host.go:66] Checking if "pause-022753" exists ...
	I1017 19:39:40.496061  717372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:39:40.559156  717372 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 19:39:40.547826226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:39:40.560075  717372 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-022753 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 19:39:40.562743  717372 out.go:179] * Pausing node pause-022753 ... 
	I1017 19:39:40.564131  717372 host.go:66] Checking if "pause-022753" exists ...
	I1017 19:39:40.564502  717372 ssh_runner.go:195] Run: systemctl --version
	I1017 19:39:40.564555  717372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:40.583116  717372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/pause-022753/id_rsa Username:docker}
	I1017 19:39:40.681995  717372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:39:40.695890  717372 pause.go:52] kubelet running: true
	I1017 19:39:40.695968  717372 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:39:40.827537  717372 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:39:40.827635  717372 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:39:40.908761  717372 cri.go:89] found id: "5ba37c1fa5f95bea1d59ac710f84739907945e2a197e61e47bf6d1476bc4ebeb"
	I1017 19:39:40.908785  717372 cri.go:89] found id: "cd36745e14f819f51f3a7ba2949b928f0863a2c547ad8c1c33f5e25cfdfefe41"
	I1017 19:39:40.908791  717372 cri.go:89] found id: "2116d855e664de0015c7b4e2404f3b0b9ef4055f7a661f3876be93bff370bf9a"
	I1017 19:39:40.908796  717372 cri.go:89] found id: "7e4b559e41fac9ceada6e300fd20518c0ea7b6817872e80f5d0c7e972c29c77f"
	I1017 19:39:40.908800  717372 cri.go:89] found id: "9aba35312d5276186cfba97e39f51e8ad13acf9f60a91db2337925d3104d8ac2"
	I1017 19:39:40.908804  717372 cri.go:89] found id: "947b66e7ea02a1cc68559e769194b29987eff2812abee9c7e28de62a892cd23c"
	I1017 19:39:40.908808  717372 cri.go:89] found id: "1bcdabcebd96eeb652ac709961258452dbc61310af706b256c1a8fde12bde65a"
	I1017 19:39:40.908811  717372 cri.go:89] found id: ""
	I1017 19:39:40.908863  717372 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:39:40.923785  717372 retry.go:31] will retry after 251.079406ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:39:40Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:39:41.175239  717372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:39:41.189411  717372 pause.go:52] kubelet running: false
	I1017 19:39:41.189478  717372 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:39:41.307256  717372 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:39:41.307350  717372 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:39:41.382835  717372 cri.go:89] found id: "5ba37c1fa5f95bea1d59ac710f84739907945e2a197e61e47bf6d1476bc4ebeb"
	I1017 19:39:41.382855  717372 cri.go:89] found id: "cd36745e14f819f51f3a7ba2949b928f0863a2c547ad8c1c33f5e25cfdfefe41"
	I1017 19:39:41.382859  717372 cri.go:89] found id: "2116d855e664de0015c7b4e2404f3b0b9ef4055f7a661f3876be93bff370bf9a"
	I1017 19:39:41.382864  717372 cri.go:89] found id: "7e4b559e41fac9ceada6e300fd20518c0ea7b6817872e80f5d0c7e972c29c77f"
	I1017 19:39:41.382868  717372 cri.go:89] found id: "9aba35312d5276186cfba97e39f51e8ad13acf9f60a91db2337925d3104d8ac2"
	I1017 19:39:41.382872  717372 cri.go:89] found id: "947b66e7ea02a1cc68559e769194b29987eff2812abee9c7e28de62a892cd23c"
	I1017 19:39:41.382876  717372 cri.go:89] found id: "1bcdabcebd96eeb652ac709961258452dbc61310af706b256c1a8fde12bde65a"
	I1017 19:39:41.382879  717372 cri.go:89] found id: ""
	I1017 19:39:41.382947  717372 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:39:41.395782  717372 retry.go:31] will retry after 560.502296ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:39:41Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:39:41.956610  717372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:39:41.978370  717372 pause.go:52] kubelet running: false
	I1017 19:39:41.978440  717372 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:39:42.098941  717372 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:39:42.099047  717372 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:39:42.199257  717372 cri.go:89] found id: "5ba37c1fa5f95bea1d59ac710f84739907945e2a197e61e47bf6d1476bc4ebeb"
	I1017 19:39:42.199297  717372 cri.go:89] found id: "cd36745e14f819f51f3a7ba2949b928f0863a2c547ad8c1c33f5e25cfdfefe41"
	I1017 19:39:42.199305  717372 cri.go:89] found id: "2116d855e664de0015c7b4e2404f3b0b9ef4055f7a661f3876be93bff370bf9a"
	I1017 19:39:42.199310  717372 cri.go:89] found id: "7e4b559e41fac9ceada6e300fd20518c0ea7b6817872e80f5d0c7e972c29c77f"
	I1017 19:39:42.199315  717372 cri.go:89] found id: "9aba35312d5276186cfba97e39f51e8ad13acf9f60a91db2337925d3104d8ac2"
	I1017 19:39:42.199320  717372 cri.go:89] found id: "947b66e7ea02a1cc68559e769194b29987eff2812abee9c7e28de62a892cd23c"
	I1017 19:39:42.199324  717372 cri.go:89] found id: "1bcdabcebd96eeb652ac709961258452dbc61310af706b256c1a8fde12bde65a"
	I1017 19:39:42.199346  717372 cri.go:89] found id: ""
	I1017 19:39:42.199396  717372 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:39:42.217888  717372 out.go:203] 
	W1017 19:39:42.219050  717372 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:39:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:39:42.219078  717372 out.go:285] * 
	W1017 19:39:42.224271  717372 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:39:42.225514  717372 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-022753 --alsologtostderr -v=5" : exit status 80
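For triage: the pause path shells out to `sudo runc list -f json` (see cri.go/retry.go above), and runc reads its default state root at /run/runc. CRI-O commonly launches its OCI runtime with a non-default state root, so /run/runc can be absent on an otherwise healthy node, which matches the retries above finding running containers via crictl while `runc list` fails. A minimal diagnostic sketch against this profile (the candidate state-root path in the last command is an assumption, not confirmed by this log):

	# which runc state roots actually exist inside the node
	minikube ssh -p pause-022753 -- sudo ls -ld /run/runc /run/crio
	# inspect where crio points its OCI runtime (runtime_path / runtime_root)
	minikube ssh -p pause-022753 -- sudo crio config
	# replay the failing call against an explicit root (path is an assumption)
	minikube ssh -p pause-022753 -- sudo runc --root /run/crio/runc list -f json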
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-022753
helpers_test.go:243: (dbg) docker inspect pause-022753:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "11937bfe02517680088b0f9a26255909075a745972037e88307b2b6c276c59f6",
	        "Created": "2025-10-17T19:39:00.970504452Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 708291,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:39:01.009183137Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/11937bfe02517680088b0f9a26255909075a745972037e88307b2b6c276c59f6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11937bfe02517680088b0f9a26255909075a745972037e88307b2b6c276c59f6/hostname",
	        "HostsPath": "/var/lib/docker/containers/11937bfe02517680088b0f9a26255909075a745972037e88307b2b6c276c59f6/hosts",
	        "LogPath": "/var/lib/docker/containers/11937bfe02517680088b0f9a26255909075a745972037e88307b2b6c276c59f6/11937bfe02517680088b0f9a26255909075a745972037e88307b2b6c276c59f6-json.log",
	        "Name": "/pause-022753",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-022753:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-022753",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "11937bfe02517680088b0f9a26255909075a745972037e88307b2b6c276c59f6",
	                "LowerDir": "/var/lib/docker/overlay2/05502a390ebf6d1a5b5e128698fa2da994d44a0fc1e1732611a2fab346339925-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/05502a390ebf6d1a5b5e128698fa2da994d44a0fc1e1732611a2fab346339925/merged",
	                "UpperDir": "/var/lib/docker/overlay2/05502a390ebf6d1a5b5e128698fa2da994d44a0fc1e1732611a2fab346339925/diff",
	                "WorkDir": "/var/lib/docker/overlay2/05502a390ebf6d1a5b5e128698fa2da994d44a0fc1e1732611a2fab346339925/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-022753",
	                "Source": "/var/lib/docker/volumes/pause-022753/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-022753",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-022753",
	                "name.minikube.sigs.k8s.io": "pause-022753",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10ae802159d16a3c2724f0aed53914099c342d006d1495bd54f439c041093b99",
	            "SandboxKey": "/var/run/docker/netns/10ae802159d1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-022753": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:3c:8c:d2:63:b4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "41156945d6028dfcdb7b06b80720f08c48f33d70d2795708ee7951bfed45ea37",
	                    "EndpointID": "bfe657eb5874f99ef487931eefc02dfb6bdd85e4f63fea0f328dc8a4fa665439",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-022753",
	                        "11937bfe0251"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
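The SSH endpoint the harness dialed earlier (127.0.0.1:33418 at sshutil.go:53) is read straight out of this payload; the Go template the runner passes to docker at cli_runner.go:164 can be replayed by hand:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-022753
	# prints 33418, i.e. NetworkSettings.Ports."22/tcp"[0].HostPort in the JSON above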
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-022753 -n pause-022753
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-022753 -n pause-022753: exit status 2 (354.343259ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
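The "(may be ok)" caveat exists because `minikube status` encodes per-component health bitwise in its exit code rather than signalling a single hard failure, so a non-zero exit with Host still Running is plausible here given that kubelet was disabled at pause.go:52 before the pause bailed out. The individual fields can be queried directly (field names per `minikube status --format`; the expected mix in the comment is an inference, not taken from this log):

	out/minikube-linux-amd64 status -p pause-022753 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'
	# likely "Running Stopped ..." here: kubelet was disabled but the pause never completed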
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-022753 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-022753 logs -n 25: (1.103264282s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-448344 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                                                                                         │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                        │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                        │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                         │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo docker system info                                                                                                                                                                                                      │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo containerd config dump                                                                                                                                                                                                  │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo crio config                                                                                                                                                                                                             │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ delete  │ -p cilium-448344                                                                                                                                                                                                                              │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ start   │ -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ start   │ -p pause-022753 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ pause   │ -p pause-022753 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:39:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:39:34.275295  715954 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:39:34.275556  715954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:39:34.275565  715954 out.go:374] Setting ErrFile to fd 2...
	I1017 19:39:34.275569  715954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:39:34.275894  715954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:39:34.276427  715954 out.go:368] Setting JSON to false
	I1017 19:39:34.277899  715954 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12113,"bootTime":1760717861,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:39:34.278006  715954 start.go:141] virtualization: kvm guest
	I1017 19:39:34.280044  715954 out.go:179] * [pause-022753] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:39:34.281463  715954 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:39:34.281452  715954 notify.go:220] Checking for updates...
	I1017 19:39:34.284400  715954 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:39:34.286317  715954 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:39:34.287920  715954 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:39:34.289279  715954 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:39:34.290999  715954 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:39:34.292986  715954 config.go:182] Loaded profile config "pause-022753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:39:34.293760  715954 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:39:34.320296  715954 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:39:34.320395  715954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:39:34.383842  715954 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 19:39:34.373107604 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:39:34.383979  715954 docker.go:318] overlay module found
	I1017 19:39:34.385582  715954 out.go:179] * Using the docker driver based on existing profile
	I1017 19:39:34.386728  715954 start.go:305] selected driver: docker
	I1017 19:39:34.386747  715954 start.go:925] validating driver "docker" against &{Name:pause-022753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-022753 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false reg
istry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:39:34.386861  715954 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:39:34.386934  715954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:39:34.446813  715954 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 19:39:34.435038475 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:39:34.447829  715954 cni.go:84] Creating CNI manager for ""
	I1017 19:39:34.447898  715954 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:39:34.447954  715954 start.go:349] cluster config:
	{Name:pause-022753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-022753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:fals
e storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:39:34.449783  715954 out.go:179] * Starting "pause-022753" primary control-plane node in "pause-022753" cluster
	I1017 19:39:34.450840  715954 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:39:34.451995  715954 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:39:34.453287  715954 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:39:34.453348  715954 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:39:34.453363  715954 cache.go:58] Caching tarball of preloaded images
	I1017 19:39:34.453392  715954 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:39:34.453465  715954 preload.go:233] Found /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:39:34.453479  715954 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:39:34.453664  715954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/config.json ...
	I1017 19:39:34.478657  715954 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:39:34.478679  715954 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:39:34.478712  715954 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:39:34.478746  715954 start.go:360] acquireMachinesLock for pause-022753: {Name:mk8b40d5617b96cfd8af53bdeb8c284959d5fecd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:39:34.478812  715954 start.go:364] duration metric: took 43.749µs to acquireMachinesLock for "pause-022753"
	I1017 19:39:34.478835  715954 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:39:34.478845  715954 fix.go:54] fixHost starting: 
	I1017 19:39:34.479062  715954 cli_runner.go:164] Run: docker container inspect pause-022753 --format={{.State.Status}}
	I1017 19:39:34.497891  715954 fix.go:112] recreateIfNeeded on pause-022753: state=Running err=<nil>
	W1017 19:39:34.497934  715954 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:39:33.958506  713511 out.go:252]   - Generating certificates and keys ...
	I1017 19:39:33.958598  713511 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 19:39:33.958724  713511 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 19:39:34.083825  713511 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 19:39:34.288110  713511 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 19:39:34.637206  713511 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 19:39:34.807984  713511 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 19:39:35.001627  713511 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 19:39:35.001836  713511 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-907112] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 19:39:35.195522  713511 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 19:39:35.195642  713511 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-907112] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 19:39:35.354269  713511 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 19:39:35.437216  713511 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 19:39:35.609895  713511 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 19:39:35.610027  713511 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 19:39:35.667589  713511 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 19:39:35.912938  713511 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 19:39:36.042035  713511 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 19:39:36.204161  713511 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 19:39:36.204930  713511 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 19:39:36.208782  713511 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 19:39:34.499767  715954 out.go:252] * Updating the running docker "pause-022753" container ...
	I1017 19:39:34.499813  715954 machine.go:93] provisionDockerMachine start ...
	I1017 19:39:34.499908  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:34.519534  715954 main.go:141] libmachine: Using SSH client type: native
	I1017 19:39:34.519803  715954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1017 19:39:34.519819  715954 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:39:34.654984  715954 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-022753
	
	I1017 19:39:34.655013  715954 ubuntu.go:182] provisioning hostname "pause-022753"
	I1017 19:39:34.655107  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:34.675935  715954 main.go:141] libmachine: Using SSH client type: native
	I1017 19:39:34.676289  715954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1017 19:39:34.676307  715954 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-022753 && echo "pause-022753" | sudo tee /etc/hostname
	I1017 19:39:34.825660  715954 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-022753
	
	I1017 19:39:34.825757  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:34.844629  715954 main.go:141] libmachine: Using SSH client type: native
	I1017 19:39:34.844991  715954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1017 19:39:34.845019  715954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-022753' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-022753/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-022753' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:39:34.982309  715954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:39:34.982351  715954 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-492109/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-492109/.minikube}
	I1017 19:39:34.982377  715954 ubuntu.go:190] setting up certificates
	I1017 19:39:34.982386  715954 provision.go:84] configureAuth start
	I1017 19:39:34.982438  715954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-022753
	I1017 19:39:35.000748  715954 provision.go:143] copyHostCerts
	I1017 19:39:35.000820  715954 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem, removing ...
	I1017 19:39:35.000839  715954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem
	I1017 19:39:35.000923  715954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem (1078 bytes)
	I1017 19:39:35.001064  715954 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem, removing ...
	I1017 19:39:35.001082  715954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem
	I1017 19:39:35.001141  715954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem (1123 bytes)
	I1017 19:39:35.001252  715954 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem, removing ...
	I1017 19:39:35.001266  715954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem
	I1017 19:39:35.001307  715954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem (1679 bytes)
	I1017 19:39:35.001414  715954 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem org=jenkins.pause-022753 san=[127.0.0.1 192.168.103.2 localhost minikube pause-022753]
	I1017 19:39:35.080542  715954 provision.go:177] copyRemoteCerts
	I1017 19:39:35.080602  715954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:39:35.080663  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:35.100662  715954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/pause-022753/id_rsa Username:docker}
	I1017 19:39:35.199265  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 19:39:35.217438  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1017 19:39:35.237160  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 19:39:35.256794  715954 provision.go:87] duration metric: took 274.39133ms to configureAuth
	I1017 19:39:35.256839  715954 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:39:35.257075  715954 config.go:182] Loaded profile config "pause-022753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:39:35.257183  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:35.278888  715954 main.go:141] libmachine: Using SSH client type: native
	I1017 19:39:35.279214  715954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1017 19:39:35.279236  715954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:39:35.589383  715954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:39:35.589426  715954 machine.go:96] duration metric: took 1.08960191s to provisionDockerMachine
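The CRIO_MINIKUBE_OPTIONS write above lands in /etc/sysconfig/crio.minikube, which the kicbase image's crio.service is assumed to source as an EnvironmentFile so that the 10.96.0.0/12 service range is treated as an insecure registry. A quick way to verify the wiring on the node (standard systemctl commands; the EnvironmentFile detail is an assumption about the kicbase unit, not something shown in this log):

	systemctl cat crio | grep -i environmentfile   # where crio.service pulls env vars from
	cat /etc/sysconfig/crio.minikube               # the options written above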
	I1017 19:39:35.589443  715954 start.go:293] postStartSetup for "pause-022753" (driver="docker")
	I1017 19:39:35.589457  715954 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:39:35.589527  715954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:39:35.589592  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:35.608002  715954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/pause-022753/id_rsa Username:docker}
	I1017 19:39:35.707426  715954 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:39:35.711407  715954 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:39:35.711442  715954 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:39:35.711455  715954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/addons for local assets ...
	I1017 19:39:35.711508  715954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/files for local assets ...
	I1017 19:39:35.711578  715954 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem -> 4957252.pem in /etc/ssl/certs
	I1017 19:39:35.711691  715954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:39:35.719913  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:39:35.739890  715954 start.go:296] duration metric: took 150.428745ms for postStartSetup
	I1017 19:39:35.740008  715954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:39:35.740065  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:35.758726  715954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/pause-022753/id_rsa Username:docker}
	I1017 19:39:35.855482  715954 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
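Both df probes above pull a single field for the filesystem backing /var: awk 'NR==2' selects the data row under the header, then $5 is the use-percentage column of df -h and $4 is the available column of df -BG. Run by hand:

	df -h /var | awk 'NR==2{print $5}'    # e.g. 17%  - current usage of the /var filesystem
	df -BG /var | awk 'NR==2{print $4}'   # e.g. 250G - space still available, in GiB blocks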
	I1017 19:39:35.860903  715954 fix.go:56] duration metric: took 1.382050656s for fixHost
	I1017 19:39:35.860937  715954 start.go:83] releasing machines lock for "pause-022753", held for 1.382110332s
	I1017 19:39:35.861018  715954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-022753
	I1017 19:39:35.881165  715954 ssh_runner.go:195] Run: cat /version.json
	I1017 19:39:35.881226  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:35.881266  715954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:39:35.881387  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:35.901375  715954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/pause-022753/id_rsa Username:docker}
	I1017 19:39:35.902469  715954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/pause-022753/id_rsa Username:docker}
	I1017 19:39:36.052752  715954 ssh_runner.go:195] Run: systemctl --version
	I1017 19:39:36.060676  715954 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:39:36.101547  715954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:39:36.106842  715954 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:39:36.106916  715954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:39:36.115717  715954 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
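Since kindnet will own pod networking, any pre-existing bridge or podman CNI configs are renamed out of the way rather than deleted (the .mk_disabled suffix is minikube's marker, so a later start can restore them). A slightly more readable form of the logged find invocation, passing the filename as a positional argument instead of splicing {} into the shell string:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;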
	I1017 19:39:36.115746  715954 start.go:495] detecting cgroup driver to use...
	I1017 19:39:36.115782  715954 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 19:39:36.115828  715954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:39:36.134429  715954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:39:36.149521  715954 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:39:36.149572  715954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:39:36.165879  715954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:39:36.180200  715954 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:39:36.307383  715954 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:39:36.426869  715954 docker.go:234] disabling docker service ...
	I1017 19:39:36.426947  715954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:39:36.442944  715954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:39:36.456781  715954 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:39:36.569774  715954 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:39:36.693365  715954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:39:36.707203  715954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:39:36.722395  715954 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:39:36.722449  715954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:39:36.731984  715954 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 19:39:36.732046  715954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:39:36.742213  715954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:39:36.752436  715954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:39:36.762174  715954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:39:36.771130  715954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:39:36.780941  715954 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:39:36.790984  715954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:39:36.800769  715954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:39:36.809661  715954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:39:36.817634  715954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:39:36.947661  715954 ssh_runner.go:195] Run: sudo systemctl restart crio
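The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before the restart. Assuming a stock kicbase config, the touched keys should afterwards read roughly as below (reconstructed from the sed expressions, not copied from the node):

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]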
	I1017 19:39:37.119019  715954 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:39:37.119107  715954 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:39:37.124752  715954 start.go:563] Will wait 60s for crictl version
	I1017 19:39:37.124839  715954 ssh_runner.go:195] Run: which crictl
	I1017 19:39:37.129981  715954 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:39:37.165430  715954 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:39:37.165499  715954 ssh_runner.go:195] Run: crio --version
	I1017 19:39:37.213542  715954 ssh_runner.go:195] Run: crio --version
	I1017 19:39:37.259665  715954 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:39:34.157198  696997 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.072112033s)
	W1017 19:39:34.157258  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1017 19:39:34.157271  696997 logs.go:123] Gathering logs for kube-apiserver [20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890] ...
	I1017 19:39:34.157300  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890"
	I1017 19:39:34.196705  696997 logs.go:123] Gathering logs for kube-controller-manager [69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571] ...
	I1017 19:39:34.196743  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571"
	I1017 19:39:34.229471  696997 logs.go:123] Gathering logs for kube-controller-manager [c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36] ...
	I1017 19:39:34.229509  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36"
	W1017 19:39:34.260990  696997 logs.go:130] failed kube-controller-manager [c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:39:34.257882    1275 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36\": container with ID starting with c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36 not found: ID does not exist" containerID="c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36"
	time="2025-10-17T19:39:34Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36\": container with ID starting with c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1017 19:39:34.257882    1275 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36\": container with ID starting with c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36 not found: ID does not exist" containerID="c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36"
	time="2025-10-17T19:39:34Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36\": container with ID starting with c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36 not found: ID does not exist"
	
	** /stderr **
	I1017 19:39:34.261028  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:39:34.261043  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:39:36.801750  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:39:36.802177  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:39:36.802235  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:39:36.802280  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:39:36.833591  696997 cri.go:89] found id: "20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890"
	I1017 19:39:36.833620  696997 cri.go:89] found id: ""
	I1017 19:39:36.833630  696997 logs.go:282] 1 containers: [20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890]
	I1017 19:39:36.833720  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:39:36.837892  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:39:36.837960  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:39:36.875941  696997 cri.go:89] found id: ""
	I1017 19:39:36.875969  696997 logs.go:282] 0 containers: []
	W1017 19:39:36.875981  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:39:36.875988  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:39:36.876071  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:39:36.907346  696997 cri.go:89] found id: ""
	I1017 19:39:36.907374  696997 logs.go:282] 0 containers: []
	W1017 19:39:36.907383  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:39:36.907389  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:39:36.907450  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:39:36.940178  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:39:36.940203  696997 cri.go:89] found id: ""
	I1017 19:39:36.940213  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:39:36.940275  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:39:36.944702  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:39:36.944805  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:39:36.978373  696997 cri.go:89] found id: ""
	I1017 19:39:36.978406  696997 logs.go:282] 0 containers: []
	W1017 19:39:36.978418  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:39:36.978427  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:39:36.978494  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:39:37.008773  696997 cri.go:89] found id: "69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571"
	I1017 19:39:37.008800  696997 cri.go:89] found id: ""
	I1017 19:39:37.008812  696997 logs.go:282] 1 containers: [69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571]
	I1017 19:39:37.008866  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:39:37.013125  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:39:37.013187  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:39:37.042597  696997 cri.go:89] found id: ""
	I1017 19:39:37.042628  696997 logs.go:282] 0 containers: []
	W1017 19:39:37.042640  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:39:37.042649  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:39:37.042741  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:39:37.079568  696997 cri.go:89] found id: ""
	I1017 19:39:37.079601  696997 logs.go:282] 0 containers: []
	W1017 19:39:37.079613  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:39:37.079626  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:39:37.079643  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:39:37.101487  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:39:37.101521  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:39:37.185268  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:39:37.185296  696997 logs.go:123] Gathering logs for kube-apiserver [20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890] ...
	I1017 19:39:37.185313  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890"
	I1017 19:39:37.234158  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:39:37.234200  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:39:37.301602  696997 logs.go:123] Gathering logs for kube-controller-manager [69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571] ...
	I1017 19:39:37.301644  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571"
	I1017 19:39:37.336741  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:39:37.336773  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:39:37.398362  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:39:37.398409  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:39:36.210446  713511 out.go:252]   - Booting up control plane ...
	I1017 19:39:36.210599  713511 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 19:39:36.210755  713511 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 19:39:36.212347  713511 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 19:39:36.235088  713511 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 19:39:36.236058  713511 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 19:39:36.236125  713511 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 19:39:36.340996  713511 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1017 19:39:37.260923  715954 cli_runner.go:164] Run: docker network inspect pause-022753 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:39:37.284506  715954 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1017 19:39:37.290700  715954 kubeadm.go:883] updating cluster {Name:pause-022753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-022753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:39:37.290893  715954 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:39:37.290952  715954 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:39:37.332397  715954 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:39:37.332445  715954 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:39:37.332515  715954 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:39:37.370329  715954 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:39:37.370357  715954 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:39:37.370366  715954 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1017 19:39:37.370516  715954 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-022753 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-022753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
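In the kubelet drop-in above, the empty ExecStart= line is systemd's reset idiom: it clears the ExecStart inherited from /lib/systemd/system/kubelet.service so the following line becomes the only start command. The merged unit can be inspected with:

	systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart   # the single effective ExecStart after the reset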
	I1017 19:39:37.370599  715954 ssh_runner.go:195] Run: crio config
	I1017 19:39:37.433195  715954 cni.go:84] Creating CNI manager for ""
	I1017 19:39:37.433229  715954 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:39:37.433251  715954 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:39:37.433282  715954 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-022753 NodeName:pause-022753 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:39:37.433486  715954 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-022753"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 19:39:37.433572  715954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:39:37.445676  715954 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:39:37.445860  715954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:39:37.457584  715954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:39:37.472860  715954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:39:37.493326  715954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
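Once the rendered config lands on the node as /var/tmp/minikube/kubeadm.yaml.new (the scp above), it can be sanity-checked before use; recent kubeadm releases (v1.31+) ship a validate subcommand for exactly this, so a plausible check with the binaries path from this run would be:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new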
	I1017 19:39:37.508635  715954 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1017 19:39:37.513866  715954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:39:37.643118  715954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:39:37.660702  715954 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753 for IP: 192.168.103.2
	I1017 19:39:37.660729  715954 certs.go:195] generating shared ca certs ...
	I1017 19:39:37.660751  715954 certs.go:227] acquiring lock for ca certs: {Name:mkc97483d62151ba5c32d923dd19e3e2b3661468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:39:37.660912  715954 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key
	I1017 19:39:37.660957  715954 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key
	I1017 19:39:37.660966  715954 certs.go:257] generating profile certs ...
	I1017 19:39:37.661071  715954 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/client.key
	I1017 19:39:37.661149  715954 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/apiserver.key.f5259238
	I1017 19:39:37.661203  715954 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/proxy-client.key
	I1017 19:39:37.661346  715954 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725.pem (1338 bytes)
	W1017 19:39:37.661379  715954 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725_empty.pem, impossibly tiny 0 bytes
	I1017 19:39:37.661387  715954 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 19:39:37.661418  715954 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem (1078 bytes)
	I1017 19:39:37.661447  715954 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:39:37.661474  715954 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem (1679 bytes)
	I1017 19:39:37.661523  715954 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:39:37.662367  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:39:37.682985  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:39:37.704608  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:39:37.725882  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 19:39:37.747620  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1017 19:39:37.770030  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 19:39:37.792441  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:39:37.813135  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:39:37.836140  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /usr/share/ca-certificates/4957252.pem (1708 bytes)
	I1017 19:39:37.858779  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:39:37.882961  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725.pem --> /usr/share/ca-certificates/495725.pem (1338 bytes)
	I1017 19:39:37.905394  715954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:39:37.921278  715954 ssh_runner.go:195] Run: openssl version
	I1017 19:39:37.929445  715954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4957252.pem && ln -fs /usr/share/ca-certificates/4957252.pem /etc/ssl/certs/4957252.pem"
	I1017 19:39:37.941165  715954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4957252.pem
	I1017 19:39:37.946561  715954 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/4957252.pem
	I1017 19:39:37.946631  715954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4957252.pem
	I1017 19:39:37.989814  715954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4957252.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:39:38.000567  715954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:39:38.011357  715954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:39:38.016233  715954 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:39:38.016295  715954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:39:38.061256  715954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:39:38.072373  715954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/495725.pem && ln -fs /usr/share/ca-certificates/495725.pem /etc/ssl/certs/495725.pem"
	I1017 19:39:38.083211  715954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/495725.pem
	I1017 19:39:38.088016  715954 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/495725.pem
	I1017 19:39:38.088094  715954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/495725.pem
	I1017 19:39:38.138504  715954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/495725.pem /etc/ssl/certs/51391683.0"
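The hex link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes: the TLS stack locates a CA in /etc/ssl/certs by hashing the certificate subject and looking for <hash>.0. The hash for any of the PEMs can be reproduced by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above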
	I1017 19:39:38.149196  715954 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:39:38.154139  715954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:39:38.197452  715954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:39:38.241325  715954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:39:38.290757  715954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:39:38.333262  715954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:39:38.376016  715954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
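Each -checkend 86400 probe above asks whether the certificate is still valid one day (86,400 seconds) from now: openssl exits 0 if so and non-zero if the cert would expire inside the window, which is how minikube decides between reusing and regenerating a cert. For example:

	if sudo openssl x509 -noout -checkend 86400 \
	     -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "cert valid for at least another 24h"
	else
	  echo "cert expires within 24h - would be regenerated"
	fi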
	I1017 19:39:38.414156  715954 kubeadm.go:400] StartCluster: {Name:pause-022753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-022753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:39:38.414271  715954 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:39:38.414326  715954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:39:38.456772  715954 cri.go:89] found id: "5ba37c1fa5f95bea1d59ac710f84739907945e2a197e61e47bf6d1476bc4ebeb"
	I1017 19:39:38.456806  715954 cri.go:89] found id: "cd36745e14f819f51f3a7ba2949b928f0863a2c547ad8c1c33f5e25cfdfefe41"
	I1017 19:39:38.456811  715954 cri.go:89] found id: "2116d855e664de0015c7b4e2404f3b0b9ef4055f7a661f3876be93bff370bf9a"
	I1017 19:39:38.456814  715954 cri.go:89] found id: "7e4b559e41fac9ceada6e300fd20518c0ea7b6817872e80f5d0c7e972c29c77f"
	I1017 19:39:38.456817  715954 cri.go:89] found id: "9aba35312d5276186cfba97e39f51e8ad13acf9f60a91db2337925d3104d8ac2"
	I1017 19:39:38.456819  715954 cri.go:89] found id: "947b66e7ea02a1cc68559e769194b29987eff2812abee9c7e28de62a892cd23c"
	I1017 19:39:38.456821  715954 cri.go:89] found id: "1bcdabcebd96eeb652ac709961258452dbc61310af706b256c1a8fde12bde65a"
	I1017 19:39:38.456824  715954 cri.go:89] found id: ""
	I1017 19:39:38.456874  715954 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 19:39:38.469656  715954 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:39:38Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:39:38.469762  715954 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:39:38.478825  715954 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 19:39:38.478845  715954 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 19:39:38.478896  715954 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 19:39:38.487442  715954 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:39:38.488113  715954 kubeconfig.go:125] found "pause-022753" server: "https://192.168.103.2:8443"
	I1017 19:39:38.489036  715954 kapi.go:59] client config for pause-022753: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/client.key", CAFile:"/home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819bc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 19:39:38.489497  715954 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 19:39:38.489513  715954 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 19:39:38.489518  715954 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1017 19:39:38.489522  715954 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 19:39:38.489525  715954 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 19:39:38.489916  715954 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 19:39:38.498147  715954 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1017 19:39:38.498182  715954 kubeadm.go:601] duration metric: took 19.330493ms to restartPrimaryControlPlane
	I1017 19:39:38.498193  715954 kubeadm.go:402] duration metric: took 84.049607ms to StartCluster
	I1017 19:39:38.498227  715954 settings.go:142] acquiring lock: {Name:mkb8ebc6edbbb6915dd74086f502bcc2721555a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:39:38.498314  715954 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:39:38.499335  715954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:39:38.499587  715954 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:39:38.499657  715954 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:39:38.499928  715954 config.go:182] Loaded profile config "pause-022753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:39:38.501712  715954 out.go:179] * Verifying Kubernetes components...
	I1017 19:39:38.502453  715954 out.go:179] * Enabled addons: 
	I1017 19:39:38.503152  715954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:39:38.503768  715954 addons.go:514] duration metric: took 4.121628ms for enable addons: enabled=[]
	I1017 19:39:38.630802  715954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:39:38.650059  715954 node_ready.go:35] waiting up to 6m0s for node "pause-022753" to be "Ready" ...
	I1017 19:39:38.661130  715954 node_ready.go:49] node "pause-022753" is "Ready"
	I1017 19:39:38.661160  715954 node_ready.go:38] duration metric: took 11.055364ms for node "pause-022753" to be "Ready" ...
	I1017 19:39:38.661176  715954 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:39:38.661225  715954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:39:38.674952  715954 api_server.go:72] duration metric: took 175.329305ms to wait for apiserver process to appear ...
	I1017 19:39:38.674984  715954 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:39:38.675008  715954 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 19:39:38.680171  715954 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
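The healthz probe can be reproduced by hand against the endpoint from this run; -k skips CA verification, or pass --cacert with the minikube CA for a strict check:

	curl -k https://192.168.103.2:8443/healthz    # prints ok on a healthy apiserver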
	I1017 19:39:38.681694  715954 api_server.go:141] control plane version: v1.34.1
	I1017 19:39:38.681728  715954 api_server.go:131] duration metric: took 6.734926ms to wait for apiserver health ...
	I1017 19:39:38.681739  715954 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:39:38.687560  715954 system_pods.go:59] 7 kube-system pods found
	I1017 19:39:38.687714  715954 system_pods.go:61] "coredns-66bc5c9577-58vbl" [00f23b6e-f269-461b-b2a4-fe6d6ba6c5b3] Running
	I1017 19:39:38.687751  715954 system_pods.go:61] "etcd-pause-022753" [80ccf168-fed0-48ee-a711-3a293e37fb97] Running
	I1017 19:39:38.687769  715954 system_pods.go:61] "kindnet-cxm7s" [ffa724f2-9fde-423c-834e-3713f5f2a57f] Running
	I1017 19:39:38.687785  715954 system_pods.go:61] "kube-apiserver-pause-022753" [6a0186d5-9c8a-4d71-9dae-c6362bda3ce4] Running
	I1017 19:39:38.687800  715954 system_pods.go:61] "kube-controller-manager-pause-022753" [874df238-097a-4f9a-97fd-495fb4d88349] Running
	I1017 19:39:38.687828  715954 system_pods.go:61] "kube-proxy-skgh2" [3590c80c-b67b-426f-a61e-1063cd30b23f] Running
	I1017 19:39:38.687863  715954 system_pods.go:61] "kube-scheduler-pause-022753" [5754cceb-b06f-4c71-86a8-feb8bba0400a] Running
	I1017 19:39:38.687882  715954 system_pods.go:74] duration metric: took 6.135595ms to wait for pod list to return data ...
	I1017 19:39:38.687904  715954 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:39:38.690826  715954 default_sa.go:45] found service account: "default"
	I1017 19:39:38.690877  715954 default_sa.go:55] duration metric: took 2.963378ms for default service account to be created ...
	I1017 19:39:38.690888  715954 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:39:38.695675  715954 system_pods.go:86] 7 kube-system pods found
	I1017 19:39:38.695719  715954 system_pods.go:89] "coredns-66bc5c9577-58vbl" [00f23b6e-f269-461b-b2a4-fe6d6ba6c5b3] Running
	I1017 19:39:38.695728  715954 system_pods.go:89] "etcd-pause-022753" [80ccf168-fed0-48ee-a711-3a293e37fb97] Running
	I1017 19:39:38.695733  715954 system_pods.go:89] "kindnet-cxm7s" [ffa724f2-9fde-423c-834e-3713f5f2a57f] Running
	I1017 19:39:38.695738  715954 system_pods.go:89] "kube-apiserver-pause-022753" [6a0186d5-9c8a-4d71-9dae-c6362bda3ce4] Running
	I1017 19:39:38.695744  715954 system_pods.go:89] "kube-controller-manager-pause-022753" [874df238-097a-4f9a-97fd-495fb4d88349] Running
	I1017 19:39:38.695749  715954 system_pods.go:89] "kube-proxy-skgh2" [3590c80c-b67b-426f-a61e-1063cd30b23f] Running
	I1017 19:39:38.695755  715954 system_pods.go:89] "kube-scheduler-pause-022753" [5754cceb-b06f-4c71-86a8-feb8bba0400a] Running
	I1017 19:39:38.695766  715954 system_pods.go:126] duration metric: took 4.870302ms to wait for k8s-apps to be running ...
	I1017 19:39:38.695776  715954 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:39:38.695836  715954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:39:38.716914  715954 system_svc.go:56] duration metric: took 21.124939ms WaitForService to wait for kubelet
	I1017 19:39:38.717018  715954 kubeadm.go:586] duration metric: took 217.400685ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:39:38.717059  715954 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:39:38.723933  715954 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 19:39:38.724120  715954 node_conditions.go:123] node cpu capacity is 8
	I1017 19:39:38.724147  715954 node_conditions.go:105] duration metric: took 7.055965ms to run NodePressure ...
	I1017 19:39:38.724163  715954 start.go:241] waiting for startup goroutines ...
	I1017 19:39:38.724171  715954 start.go:246] waiting for cluster config update ...
	I1017 19:39:38.724180  715954 start.go:255] writing updated cluster config ...
	I1017 19:39:38.724601  715954 ssh_runner.go:195] Run: rm -f paused
	I1017 19:39:38.730246  715954 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:39:38.731193  715954 kapi.go:59] client config for pause-022753: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/client.key", CAFile:"/home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819bc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 19:39:38.734948  715954 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-58vbl" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:38.740313  715954 pod_ready.go:94] pod "coredns-66bc5c9577-58vbl" is "Ready"
	I1017 19:39:38.740340  715954 pod_ready.go:86] duration metric: took 5.365382ms for pod "coredns-66bc5c9577-58vbl" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:38.742741  715954 pod_ready.go:83] waiting for pod "etcd-pause-022753" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:38.747141  715954 pod_ready.go:94] pod "etcd-pause-022753" is "Ready"
	I1017 19:39:38.747167  715954 pod_ready.go:86] duration metric: took 4.405584ms for pod "etcd-pause-022753" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:38.749534  715954 pod_ready.go:83] waiting for pod "kube-apiserver-pause-022753" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:38.753727  715954 pod_ready.go:94] pod "kube-apiserver-pause-022753" is "Ready"
	I1017 19:39:38.753750  715954 pod_ready.go:86] duration metric: took 4.194052ms for pod "kube-apiserver-pause-022753" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:38.755749  715954 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-022753" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:39.134659  715954 pod_ready.go:94] pod "kube-controller-manager-pause-022753" is "Ready"
	I1017 19:39:39.134704  715954 pod_ready.go:86] duration metric: took 378.934411ms for pod "kube-controller-manager-pause-022753" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:39.334735  715954 pod_ready.go:83] waiting for pod "kube-proxy-skgh2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:39.735193  715954 pod_ready.go:94] pod "kube-proxy-skgh2" is "Ready"
	I1017 19:39:39.735219  715954 pod_ready.go:86] duration metric: took 400.45918ms for pod "kube-proxy-skgh2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:39.935899  715954 pod_ready.go:83] waiting for pod "kube-scheduler-pause-022753" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:40.335512  715954 pod_ready.go:94] pod "kube-scheduler-pause-022753" is "Ready"
	I1017 19:39:40.335538  715954 pod_ready.go:86] duration metric: took 399.601806ms for pod "kube-scheduler-pause-022753" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:40.335552  715954 pod_ready.go:40] duration metric: took 1.605268487s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
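
The extra readiness wait above can be approximated from the host with kubectl's built-in wait, using the same label selectors the test loops over (a sketch; kubeconfig assumed to point at pause-022753, and note the test also accepts pods that are gone, which kubectl wait does not):

  for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
             component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
    kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=4m
  done
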
	I1017 19:39:40.395111  715954 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 19:39:40.396751  715954 out.go:179] * Done! kubectl is now configured to use "pause-022753" cluster and "default" namespace by default
	I1017 19:39:40.842968  713511 kubeadm.go:318] [apiclient] All control plane components are healthy after 4.502646 seconds
	I1017 19:39:40.843157  713511 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 19:39:40.858637  713511 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 19:39:41.381070  713511 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 19:39:41.381324  713511 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-907112 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 19:39:41.891417  713511 kubeadm.go:318] [bootstrap-token] Using token: qxqgah.x8ddopkk8ykbd5wk
	I1017 19:39:41.892746  713511 out.go:252]   - Configuring RBAC rules ...
	I1017 19:39:41.892931  713511 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 19:39:41.898485  713511 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 19:39:41.906895  713511 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 19:39:41.910256  713511 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 19:39:41.913463  713511 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 19:39:41.917356  713511 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 19:39:41.927853  713511 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 19:39:42.125853  713511 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 19:39:42.303855  713511 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 19:39:42.305662  713511 kubeadm.go:318] 
	I1017 19:39:42.305781  713511 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 19:39:42.305789  713511 kubeadm.go:318] 
	I1017 19:39:42.305886  713511 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 19:39:42.305892  713511 kubeadm.go:318] 
	I1017 19:39:42.305924  713511 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 19:39:42.305997  713511 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 19:39:42.306059  713511 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 19:39:42.306065  713511 kubeadm.go:318] 
	I1017 19:39:42.306132  713511 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 19:39:42.306138  713511 kubeadm.go:318] 
	I1017 19:39:42.306198  713511 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 19:39:42.306203  713511 kubeadm.go:318] 
	I1017 19:39:42.306272  713511 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 19:39:42.306371  713511 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 19:39:42.306459  713511 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 19:39:42.306465  713511 kubeadm.go:318] 
	I1017 19:39:42.306576  713511 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 19:39:42.306666  713511 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 19:39:42.306673  713511 kubeadm.go:318] 
	I1017 19:39:42.306787  713511 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token qxqgah.x8ddopkk8ykbd5wk \
	I1017 19:39:42.306904  713511 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e \
	I1017 19:39:42.306929  713511 kubeadm.go:318] 	--control-plane 
	I1017 19:39:42.306934  713511 kubeadm.go:318] 
	I1017 19:39:42.307029  713511 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 19:39:42.307048  713511 kubeadm.go:318] 
	I1017 19:39:42.307140  713511 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token qxqgah.x8ddopkk8ykbd5wk \
	I1017 19:39:42.307254  713511 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e 
	I1017 19:39:42.310461  713511 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 19:39:42.310601  713511 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 19:39:42.310638  713511 cni.go:84] Creating CNI manager for ""
	I1017 19:39:42.310648  713511 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:39:42.312162  713511 out.go:179] * Configuring CNI (Container Networking Interface) ...
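
Of the two kubeadm preflight warnings above, only the kubelet one is actionable here; as the message itself suggests, it is cleared on the node with:

  sudo systemctl enable kubelet.service
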
	I1017 19:39:37.449424  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:39:37.449464  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:39:40.050794  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:39:40.051249  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:39:40.051315  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:39:40.051424  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:39:40.084704  696997 cri.go:89] found id: "20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890"
	I1017 19:39:40.084733  696997 cri.go:89] found id: ""
	I1017 19:39:40.084744  696997 logs.go:282] 1 containers: [20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890]
	I1017 19:39:40.084819  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:39:40.089705  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:39:40.089798  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:39:40.122877  696997 cri.go:89] found id: ""
	I1017 19:39:40.122909  696997 logs.go:282] 0 containers: []
	W1017 19:39:40.122920  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:39:40.122934  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:39:40.123004  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:39:40.156778  696997 cri.go:89] found id: ""
	I1017 19:39:40.156805  696997 logs.go:282] 0 containers: []
	W1017 19:39:40.156815  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:39:40.156823  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:39:40.156886  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:39:40.191177  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:39:40.191207  696997 cri.go:89] found id: ""
	I1017 19:39:40.191218  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:39:40.191282  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:39:40.196194  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:39:40.196277  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:39:40.230553  696997 cri.go:89] found id: ""
	I1017 19:39:40.230585  696997 logs.go:282] 0 containers: []
	W1017 19:39:40.230597  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:39:40.230605  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:39:40.230669  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:39:40.264715  696997 cri.go:89] found id: "69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571"
	I1017 19:39:40.264740  696997 cri.go:89] found id: ""
	I1017 19:39:40.264748  696997 logs.go:282] 1 containers: [69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571]
	I1017 19:39:40.264804  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:39:40.269556  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:39:40.269641  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:39:40.311062  696997 cri.go:89] found id: ""
	I1017 19:39:40.311242  696997 logs.go:282] 0 containers: []
	W1017 19:39:40.311258  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:39:40.311266  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:39:40.311348  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:39:40.349086  696997 cri.go:89] found id: ""
	I1017 19:39:40.349118  696997 logs.go:282] 0 containers: []
	W1017 19:39:40.349129  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:39:40.349142  696997 logs.go:123] Gathering logs for kube-controller-manager [69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571] ...
	I1017 19:39:40.349159  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571"
	I1017 19:39:40.385873  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:39:40.385906  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:39:40.438621  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:39:40.438660  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:39:40.474960  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:39:40.474987  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:39:40.562277  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:39:40.562317  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:39:40.580589  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:39:40.580623  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:39:40.641887  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:39:40.641915  696997 logs.go:123] Gathering logs for kube-apiserver [20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890] ...
	I1017 19:39:40.641929  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890"
	I1017 19:39:40.676274  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:39:40.676315  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
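
The recovery loop above alternates a healthz probe with per-component crictl listings; both can be reproduced on the node. A sketch, assuming curl is available and using -k because the apiserver's serving certificate is not in the local trust store:

  curl -k https://192.168.76.2:8443/healthz
  sudo crictl ps -a --quiet --name=kube-apiserver
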
	I1017 19:39:42.313335  713511 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 19:39:42.318456  713511 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1017 19:39:42.318481  713511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 19:39:42.334143  713511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
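
Once the CNI manifest is applied, the kindnet pods should come up in kube-system; a quick check, assuming the manifest labels them app=kindnet:

  kubectl -n kube-system get pods -l app=kindnet
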
	
	
	==> CRI-O <==
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.044879993Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.045729087Z" level=info msg="Conmon does support the --sync option"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.045748414Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.045762564Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.046465956Z" level=info msg="Conmon does support the --sync option"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.046482037Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.051033043Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.051058164Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.051557757Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.051990795Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.052045615Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.058532312Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.111585896Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-58vbl Namespace:kube-system ID:b79856717505088643a9994f73a8b304e415632d0ef6934b3d52bc1e2cc9a861 UID:00f23b6e-f269-461b-b2a4-fe6d6ba6c5b3 NetNS:/var/run/netns/5f767ddf-ead3-45f7-8814-665ecf571a69 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00089cc38}] Aliases:map[]}"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.112040353Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-58vbl for CNI network kindnet (type=ptp)"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.11354792Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.113592643Z" level=info msg="Starting seccomp notifier watcher"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.113703309Z" level=info msg="Create NRI interface"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.113922171Z" level=info msg="built-in NRI default validator is disabled"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.113951443Z" level=info msg="runtime interface created"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.113968544Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.113976853Z" level=info msg="runtime interface starting up..."
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.113991407Z" level=info msg="starting plugins..."
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.114008955Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.114663321Z" level=info msg="No systemd watchdog enabled"
	Oct 17 19:39:37 pause-022753 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
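
The CRI-O excerpt above is the tail of the unit journal; the same view can be taken directly on the node (the test itself gathers it with the journalctl form):

  sudo journalctl -u crio -n 400
  systemctl status crio.service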
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	5ba37c1fa5f95       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   0                   b798567175050       coredns-66bc5c9577-58vbl               kube-system
	cd36745e14f81       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   22 seconds ago      Running             kindnet-cni               0                   c6b999bc6aa5f       kindnet-cxm7s                          kube-system
	2116d855e664d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   22 seconds ago      Running             kube-proxy                0                   ae5e090db564a       kube-proxy-skgh2                       kube-system
	7e4b559e41fac       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   32 seconds ago      Running             kube-apiserver            0                   09036383e12a6       kube-apiserver-pause-022753            kube-system
	9aba35312d527       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   32 seconds ago      Running             kube-scheduler            0                   c22c241656c64       kube-scheduler-pause-022753            kube-system
	947b66e7ea02a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   32 seconds ago      Running             kube-controller-manager   0                   7845bf3cd3be9       kube-controller-manager-pause-022753   kube-system
	1bcdabcebd96e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   32 seconds ago      Running             etcd                      0                   c618e1733eb87       etcd-pause-022753                      kube-system
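
The table above is crictl's default listing; to reproduce it, or to pull logs for one of the container IDs shown, run on the node (the container ID is a placeholder):

  sudo crictl ps -a
  sudo crictl logs --tail 400 <container-id>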
	
	
	==> coredns [5ba37c1fa5f95bea1d59ac710f84739907945e2a197e61e47bf6d1476bc4ebeb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45167 - 50446 "HINFO IN 5867497680200239394.7504510633316126314. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066189214s
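
The CoreDNS banner shows only a startup self-check (the HINFO NXDOMAIN). Resolution through the kube-dns ClusterIP (10.96.0.10, per the apiserver allocation logged below) can be probed from the node with a sketch like this, assuming dig is available there:

  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
  dig @10.96.0.10 kubernetes.default.svc.cluster.local +short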
	
	
	==> describe nodes <==
	Name:               pause-022753
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-022753
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=pause-022753
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_39_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:39:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-022753
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:39:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:39:31 +0000   Fri, 17 Oct 2025 19:39:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:39:31 +0000   Fri, 17 Oct 2025 19:39:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:39:31 +0000   Fri, 17 Oct 2025 19:39:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:39:31 +0000   Fri, 17 Oct 2025 19:39:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-022753
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                eeebd0d7-b163-484e-9d73-72842433540f
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-58vbl                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-pause-022753                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-cxm7s                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-pause-022753             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-pause-022753    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-skgh2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-pause-022753             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21s   kube-proxy       
	  Normal  Starting                 28s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s   kubelet          Node pause-022753 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s   kubelet          Node pause-022753 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s   kubelet          Node pause-022753 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s   node-controller  Node pause-022753 event: Registered Node pause-022753 in Controller
	  Normal  NodeReady                12s   kubelet          Node pause-022753 status is now: NodeReady
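
The node description above is ordinary kubectl output and can be refreshed at any point while the cluster is up:

  kubectl describe node pause-022753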
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d1 49 91 03 c2 08 06
	[  +0.000804] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 a9 2b 44 da ae 08 06
	[Oct17 18:59] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022229] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023876] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
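
The martian-source lines are kernel warnings about packets whose source address is not valid on the receiving interface (loopback and pod-network addresses showing up on eth0); in this Docker-based test environment they are typically benign noise rather than a failure. The filter used for this excerpt:

  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400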
	
	
	==> etcd [1bcdabcebd96eeb652ac709961258452dbc61310af706b256c1a8fde12bde65a] <==
	{"level":"warn","ts":"2025-10-17T19:39:12.038048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.044233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.051016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.059208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.066581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.073313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.080103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.094246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.101409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.107767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.113849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.120571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.127120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.150465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.158785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.164816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.229321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52034","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T19:39:27.226148Z","caller":"traceutil/trace.go:172","msg":"trace[1440733298] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"239.63091ms","start":"2025-10-17T19:39:26.986499Z","end":"2025-10-17T19:39:27.226130Z","steps":["trace[1440733298] 'process raft request'  (duration: 239.514668ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:39:27.437726Z","caller":"traceutil/trace.go:172","msg":"trace[516016304] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"201.914248ms","start":"2025-10-17T19:39:27.235765Z","end":"2025-10-17T19:39:27.437679Z","steps":["trace[516016304] 'process raft request'  (duration: 160.970313ms)","trace[516016304] 'compare'  (duration: 40.787808ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T19:39:27.820398Z","caller":"traceutil/trace.go:172","msg":"trace[1512509626] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"161.284186ms","start":"2025-10-17T19:39:27.659082Z","end":"2025-10-17T19:39:27.820366Z","steps":["trace[1512509626] 'process raft request'  (duration: 161.154687ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:39:28.179235Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"197.555726ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T19:39:28.179329Z","caller":"traceutil/trace.go:172","msg":"trace[707968486] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:382; }","duration":"197.701002ms","start":"2025-10-17T19:39:27.981611Z","end":"2025-10-17T19:39:28.179312Z","steps":["trace[707968486] 'agreement among raft nodes before linearized reading'  (duration: 54.333099ms)","trace[707968486] 'range keys from in-memory index tree'  (duration: 143.202219ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T19:39:28.180050Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.430748ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789398982225323 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-022753\" mod_revision:382 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-022753\" value_size:7621 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-022753\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-17T19:39:28.180153Z","caller":"traceutil/trace.go:172","msg":"trace[1661665688] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"350.600509ms","start":"2025-10-17T19:39:27.829536Z","end":"2025-10-17T19:39:28.180137Z","steps":["trace[1661665688] 'process raft request'  (duration: 206.467169ms)","trace[1661665688] 'compare'  (duration: 143.305173ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T19:39:28.180217Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-17T19:39:27.829517Z","time spent":"350.662933ms","remote":"127.0.0.1:51262","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7683,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-022753\" mod_revision:382 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-022753\" value_size:7621 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-022753\" > >"}
	
	
	==> kernel <==
	 19:39:43 up  3:22,  0 user,  load average: 4.81, 3.16, 1.87
	Linux pause-022753 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cd36745e14f819f51f3a7ba2949b928f0863a2c547ad8c1c33f5e25cfdfefe41] <==
	I1017 19:39:21.375450       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:39:21.467150       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1017 19:39:21.467348       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:39:21.467375       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:39:21.467408       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:39:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:39:21.667055       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:39:21.667101       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:39:21.667139       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:39:21.667951       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:39:21.967950       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:39:21.967989       1 metrics.go:72] Registering metrics
	I1017 19:39:21.968056       1 controller.go:711] "Syncing nftables rules"
	I1017 19:39:31.579809       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 19:39:31.579882       1 main.go:301] handling current node
	I1017 19:39:41.586761       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 19:39:41.586793       1 main.go:301] handling current node
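
The only error in the kindnet log is the NRI connection failure at 19:39:21; the socket it dials (nri_listen = /var/run/nri/nri.sock in the dumped CRI-O config) is created by CRI-O's NRI interface, which the CRI-O log above shows starting only at 19:39:37. Whether the socket exists after crio settles can be checked on the node:

  ls -l /var/run/nri/nri.sock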
	
	
	==> kube-apiserver [7e4b559e41fac9ceada6e300fd20518c0ea7b6817872e80f5d0c7e972c29c77f] <==
	I1017 19:39:12.732785       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:39:12.732882       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1017 19:39:12.733867       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 19:39:12.738853       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:39:12.739130       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1017 19:39:12.747588       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:39:12.747862       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:39:12.928123       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:39:13.636561       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 19:39:13.640477       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 19:39:13.640500       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:39:14.229560       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:39:14.274846       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:39:14.345485       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 19:39:14.352103       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1017 19:39:14.353514       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:39:14.358322       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:39:14.672486       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:39:15.410824       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 19:39:15.422084       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 19:39:15.429755       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 19:39:19.975376       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:39:20.677254       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:39:20.682199       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:39:20.824490       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [947b66e7ea02a1cc68559e769194b29987eff2812abee9c7e28de62a892cd23c] <==
	I1017 19:39:19.670895       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:39:19.671077       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 19:39:19.672154       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 19:39:19.672196       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:39:19.672234       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 19:39:19.672381       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 19:39:19.672408       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 19:39:19.672441       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 19:39:19.672479       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 19:39:19.672517       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:39:19.672809       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 19:39:19.673584       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 19:39:19.675027       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 19:39:19.675746       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 19:39:19.676870       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 19:39:19.676935       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 19:39:19.676975       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 19:39:19.676982       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 19:39:19.676987       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 19:39:19.678026       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:39:19.685395       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-022753" podCIDRs=["10.244.0.0/24"]
	I1017 19:39:19.692308       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:39:19.707563       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 19:39:19.707742       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:39:34.624501       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2116d855e664de0015c7b4e2404f3b0b9ef4055f7a661f3876be93bff370bf9a] <==
	I1017 19:39:21.267812       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:39:21.355793       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:39:21.456276       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:39:21.456328       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1017 19:39:21.456468       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:39:21.476961       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:39:21.477035       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:39:21.482482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:39:21.482930       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:39:21.482957       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:39:21.484012       1 config.go:200] "Starting service config controller"
	I1017 19:39:21.484028       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:39:21.484080       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:39:21.484152       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:39:21.484180       1 config.go:309] "Starting node config controller"
	I1017 19:39:21.484196       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:39:21.484205       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:39:21.484187       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:39:21.484216       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:39:21.584580       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:39:21.584716       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:39:21.584751       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
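
The single kube-proxy error is the unset nodePortAddresses warning, and the message itself names the fix. In a kubeadm-style cluster the setting lives in the kube-proxy ConfigMap, so a sketch of inspecting it (ConfigMap name assumed to be the kubeadm default):

  kubectl -n kube-system get configmap kube-proxy -o yaml | grep -A1 nodePortAddresses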
	
	
	==> kube-scheduler [9aba35312d5276186cfba97e39f51e8ad13acf9f60a91db2337925d3104d8ac2] <==
	E1017 19:39:12.681704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:39:12.681736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:39:12.681749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:39:12.681796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:39:12.681870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:39:12.681860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:39:12.681903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:39:12.681996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:39:12.682050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:39:13.508501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:39:13.564057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:39:13.582808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:39:13.601533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:39:13.707198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 19:39:13.724398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:39:13.757623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:39:13.769104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:39:13.778612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:39:13.779672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:39:13.876072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:39:13.890677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:39:13.955005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:39:13.955921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:39:14.013266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1017 19:39:15.678569       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:39:16 pause-022753 kubelet[1294]: E1017 19:39:16.297021    1294 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-022753\" already exists" pod="kube-system/kube-apiserver-pause-022753"
	Oct 17 19:39:16 pause-022753 kubelet[1294]: I1017 19:39:16.317107    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-022753" podStartSLOduration=1.317082748 podStartE2EDuration="1.317082748s" podCreationTimestamp="2025-10-17 19:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:39:16.317078094 +0000 UTC m=+1.145671631" watchObservedRunningTime="2025-10-17 19:39:16.317082748 +0000 UTC m=+1.145676278"
	Oct 17 19:39:16 pause-022753 kubelet[1294]: I1017 19:39:16.331469    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-022753" podStartSLOduration=1.331445842 podStartE2EDuration="1.331445842s" podCreationTimestamp="2025-10-17 19:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:39:16.331322015 +0000 UTC m=+1.159915551" watchObservedRunningTime="2025-10-17 19:39:16.331445842 +0000 UTC m=+1.160039375"
	Oct 17 19:39:16 pause-022753 kubelet[1294]: I1017 19:39:16.354228    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-022753" podStartSLOduration=1.354205217 podStartE2EDuration="1.354205217s" podCreationTimestamp="2025-10-17 19:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:39:16.342247751 +0000 UTC m=+1.170841289" watchObservedRunningTime="2025-10-17 19:39:16.354205217 +0000 UTC m=+1.182798755"
	Oct 17 19:39:16 pause-022753 kubelet[1294]: I1017 19:39:16.369420    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-022753" podStartSLOduration=1.369391021 podStartE2EDuration="1.369391021s" podCreationTimestamp="2025-10-17 19:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:39:16.354909158 +0000 UTC m=+1.183502700" watchObservedRunningTime="2025-10-17 19:39:16.369391021 +0000 UTC m=+1.197984558"
	Oct 17 19:39:19 pause-022753 kubelet[1294]: I1017 19:39:19.699218    1294 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 17 19:39:19 pause-022753 kubelet[1294]: I1017 19:39:19.700530    1294 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 19:39:20 pause-022753 kubelet[1294]: I1017 19:39:20.888892    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3590c80c-b67b-426f-a61e-1063cd30b23f-kube-proxy\") pod \"kube-proxy-skgh2\" (UID: \"3590c80c-b67b-426f-a61e-1063cd30b23f\") " pod="kube-system/kube-proxy-skgh2"
	Oct 17 19:39:20 pause-022753 kubelet[1294]: I1017 19:39:20.888934    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffa724f2-9fde-423c-834e-3713f5f2a57f-xtables-lock\") pod \"kindnet-cxm7s\" (UID: \"ffa724f2-9fde-423c-834e-3713f5f2a57f\") " pod="kube-system/kindnet-cxm7s"
	Oct 17 19:39:20 pause-022753 kubelet[1294]: I1017 19:39:20.888950    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffa724f2-9fde-423c-834e-3713f5f2a57f-lib-modules\") pod \"kindnet-cxm7s\" (UID: \"ffa724f2-9fde-423c-834e-3713f5f2a57f\") " pod="kube-system/kindnet-cxm7s"
	Oct 17 19:39:20 pause-022753 kubelet[1294]: I1017 19:39:20.888973    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkpn5\" (UniqueName: \"kubernetes.io/projected/3590c80c-b67b-426f-a61e-1063cd30b23f-kube-api-access-kkpn5\") pod \"kube-proxy-skgh2\" (UID: \"3590c80c-b67b-426f-a61e-1063cd30b23f\") " pod="kube-system/kube-proxy-skgh2"
	Oct 17 19:39:20 pause-022753 kubelet[1294]: I1017 19:39:20.888991    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3590c80c-b67b-426f-a61e-1063cd30b23f-xtables-lock\") pod \"kube-proxy-skgh2\" (UID: \"3590c80c-b67b-426f-a61e-1063cd30b23f\") " pod="kube-system/kube-proxy-skgh2"
	Oct 17 19:39:20 pause-022753 kubelet[1294]: I1017 19:39:20.889012    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ffa724f2-9fde-423c-834e-3713f5f2a57f-cni-cfg\") pod \"kindnet-cxm7s\" (UID: \"ffa724f2-9fde-423c-834e-3713f5f2a57f\") " pod="kube-system/kindnet-cxm7s"
	Oct 17 19:39:20 pause-022753 kubelet[1294]: I1017 19:39:20.889045    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3590c80c-b67b-426f-a61e-1063cd30b23f-lib-modules\") pod \"kube-proxy-skgh2\" (UID: \"3590c80c-b67b-426f-a61e-1063cd30b23f\") " pod="kube-system/kube-proxy-skgh2"
	Oct 17 19:39:20 pause-022753 kubelet[1294]: I1017 19:39:20.889077    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdh9p\" (UniqueName: \"kubernetes.io/projected/ffa724f2-9fde-423c-834e-3713f5f2a57f-kube-api-access-kdh9p\") pod \"kindnet-cxm7s\" (UID: \"ffa724f2-9fde-423c-834e-3713f5f2a57f\") " pod="kube-system/kindnet-cxm7s"
	Oct 17 19:39:21 pause-022753 kubelet[1294]: I1017 19:39:21.315190    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cxm7s" podStartSLOduration=1.315168257 podStartE2EDuration="1.315168257s" podCreationTimestamp="2025-10-17 19:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:39:21.315002574 +0000 UTC m=+6.143596132" watchObservedRunningTime="2025-10-17 19:39:21.315168257 +0000 UTC m=+6.143761801"
	Oct 17 19:39:21 pause-022753 kubelet[1294]: I1017 19:39:21.345807    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-skgh2" podStartSLOduration=1.3457806780000001 podStartE2EDuration="1.345780678s" podCreationTimestamp="2025-10-17 19:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:39:21.345724631 +0000 UTC m=+6.174318168" watchObservedRunningTime="2025-10-17 19:39:21.345780678 +0000 UTC m=+6.174374291"
	Oct 17 19:39:31 pause-022753 kubelet[1294]: I1017 19:39:31.781328    1294 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 17 19:39:31 pause-022753 kubelet[1294]: I1017 19:39:31.867798    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5w8\" (UniqueName: \"kubernetes.io/projected/00f23b6e-f269-461b-b2a4-fe6d6ba6c5b3-kube-api-access-lc5w8\") pod \"coredns-66bc5c9577-58vbl\" (UID: \"00f23b6e-f269-461b-b2a4-fe6d6ba6c5b3\") " pod="kube-system/coredns-66bc5c9577-58vbl"
	Oct 17 19:39:31 pause-022753 kubelet[1294]: I1017 19:39:31.867843    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00f23b6e-f269-461b-b2a4-fe6d6ba6c5b3-config-volume\") pod \"coredns-66bc5c9577-58vbl\" (UID: \"00f23b6e-f269-461b-b2a4-fe6d6ba6c5b3\") " pod="kube-system/coredns-66bc5c9577-58vbl"
	Oct 17 19:39:32 pause-022753 kubelet[1294]: I1017 19:39:32.343125    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-58vbl" podStartSLOduration=12.343100858 podStartE2EDuration="12.343100858s" podCreationTimestamp="2025-10-17 19:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:39:32.343013501 +0000 UTC m=+17.171607039" watchObservedRunningTime="2025-10-17 19:39:32.343100858 +0000 UTC m=+17.171694392"
	Oct 17 19:39:40 pause-022753 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 19:39:40 pause-022753 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 19:39:40 pause-022753 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 19:39:40 pause-022753 systemd[1]: kubelet.service: Consumed 1.290s CPU time.
	

-- /stdout --
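The scheduler's "Failed to watch ... is forbidden" errors above are the usual startup race: kube-scheduler begins listing resources before the restarted apiserver is serving its RBAC bindings, and the retries stop once "Caches are synced" appears. If the permissions themselves were suspect, a spot check against the same profile would look like this (hypothetical commands, not part of the captured run; kubectl's auth can-i accepts the resource.group form shown in the errors):

	kubectl --context pause-022753 auth can-i list nodes --as=system:kube-scheduler
	kubectl --context pause-022753 auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler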
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-022753 -n pause-022753
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-022753 -n pause-022753: exit status 2 (339.255466ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-022753 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-022753
helpers_test.go:243: (dbg) docker inspect pause-022753:

-- stdout --
	[
	    {
	        "Id": "11937bfe02517680088b0f9a26255909075a745972037e88307b2b6c276c59f6",
	        "Created": "2025-10-17T19:39:00.970504452Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 708291,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:39:01.009183137Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/11937bfe02517680088b0f9a26255909075a745972037e88307b2b6c276c59f6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11937bfe02517680088b0f9a26255909075a745972037e88307b2b6c276c59f6/hostname",
	        "HostsPath": "/var/lib/docker/containers/11937bfe02517680088b0f9a26255909075a745972037e88307b2b6c276c59f6/hosts",
	        "LogPath": "/var/lib/docker/containers/11937bfe02517680088b0f9a26255909075a745972037e88307b2b6c276c59f6/11937bfe02517680088b0f9a26255909075a745972037e88307b2b6c276c59f6-json.log",
	        "Name": "/pause-022753",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-022753:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-022753",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "11937bfe02517680088b0f9a26255909075a745972037e88307b2b6c276c59f6",
	                "LowerDir": "/var/lib/docker/overlay2/05502a390ebf6d1a5b5e128698fa2da994d44a0fc1e1732611a2fab346339925-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/05502a390ebf6d1a5b5e128698fa2da994d44a0fc1e1732611a2fab346339925/merged",
	                "UpperDir": "/var/lib/docker/overlay2/05502a390ebf6d1a5b5e128698fa2da994d44a0fc1e1732611a2fab346339925/diff",
	                "WorkDir": "/var/lib/docker/overlay2/05502a390ebf6d1a5b5e128698fa2da994d44a0fc1e1732611a2fab346339925/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-022753",
	                "Source": "/var/lib/docker/volumes/pause-022753/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-022753",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-022753",
	                "name.minikube.sigs.k8s.io": "pause-022753",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10ae802159d16a3c2724f0aed53914099c342d006d1495bd54f439c041093b99",
	            "SandboxKey": "/var/run/docker/netns/10ae802159d1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-022753": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:3c:8c:d2:63:b4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "41156945d6028dfcdb7b06b80720f08c48f33d70d2795708ee7951bfed45ea37",
	                    "EndpointID": "bfe657eb5874f99ef487931eefc02dfb6bdd85e4f63fea0f328dc8a4fa665439",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-022753",
	                        "11937bfe0251"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
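The NetworkSettings.Ports map in this inspect output is what minikube reads to locate the host port mapped to the container's SSH port 22; the "Last Start" log below issues the same Go-template query through cli_runner. Run by hand it reduces to:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-022753

which prints 33418 for this container.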
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-022753 -n pause-022753
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-022753 -n pause-022753: exit status 2 (339.894795ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
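The harness probes one status field per invocation, but the --format flag takes any Go template over the status struct, so the split state after pause (kubelet.service stopped, docker container still running, as the kubelet journal above shows) can be read in a single call. A sketch, assuming the field is named Kubelet as in minikube's plain status output:

	out/minikube-linux-amd64 status --format '{{.Host}} {{.Kubelet}} {{.APIServer}}' -p pause-022753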
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-022753 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-448344 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                                                                                         │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                        │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                        │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                         │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo docker system info                                                                                                                                                                                                      │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo containerd config dump                                                                                                                                                                                                  │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo crio config                                                                                                                                                                                                             │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ delete  │ -p cilium-448344                                                                                                                                                                                                                              │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ start   │ -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ start   │ -p pause-022753 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ pause   │ -p pause-022753 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:39:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:39:34.275295  715954 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:39:34.275556  715954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:39:34.275565  715954 out.go:374] Setting ErrFile to fd 2...
	I1017 19:39:34.275569  715954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:39:34.275894  715954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:39:34.276427  715954 out.go:368] Setting JSON to false
	I1017 19:39:34.277899  715954 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12113,"bootTime":1760717861,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:39:34.278006  715954 start.go:141] virtualization: kvm guest
	I1017 19:39:34.280044  715954 out.go:179] * [pause-022753] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:39:34.281463  715954 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:39:34.281452  715954 notify.go:220] Checking for updates...
	I1017 19:39:34.284400  715954 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:39:34.286317  715954 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:39:34.287920  715954 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:39:34.289279  715954 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:39:34.290999  715954 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:39:34.292986  715954 config.go:182] Loaded profile config "pause-022753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:39:34.293760  715954 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:39:34.320296  715954 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:39:34.320395  715954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:39:34.383842  715954 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 19:39:34.373107604 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:39:34.383979  715954 docker.go:318] overlay module found
	I1017 19:39:34.385582  715954 out.go:179] * Using the docker driver based on existing profile
	I1017 19:39:34.386728  715954 start.go:305] selected driver: docker
	I1017 19:39:34.386747  715954 start.go:925] validating driver "docker" against &{Name:pause-022753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-022753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:39:34.386861  715954 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:39:34.386934  715954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:39:34.446813  715954 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 19:39:34.435038475 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:39:34.447829  715954 cni.go:84] Creating CNI manager for ""
	I1017 19:39:34.447898  715954 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:39:34.447954  715954 start.go:349] cluster config:
	{Name:pause-022753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-022753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:39:34.449783  715954 out.go:179] * Starting "pause-022753" primary control-plane node in "pause-022753" cluster
	I1017 19:39:34.450840  715954 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:39:34.451995  715954 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:39:34.453287  715954 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:39:34.453348  715954 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:39:34.453363  715954 cache.go:58] Caching tarball of preloaded images
	I1017 19:39:34.453392  715954 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:39:34.453465  715954 preload.go:233] Found /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:39:34.453479  715954 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:39:34.453664  715954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/config.json ...
	I1017 19:39:34.478657  715954 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:39:34.478679  715954 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:39:34.478712  715954 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:39:34.478746  715954 start.go:360] acquireMachinesLock for pause-022753: {Name:mk8b40d5617b96cfd8af53bdeb8c284959d5fecd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:39:34.478812  715954 start.go:364] duration metric: took 43.749µs to acquireMachinesLock for "pause-022753"
	I1017 19:39:34.478835  715954 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:39:34.478845  715954 fix.go:54] fixHost starting: 
	I1017 19:39:34.479062  715954 cli_runner.go:164] Run: docker container inspect pause-022753 --format={{.State.Status}}
	I1017 19:39:34.497891  715954 fix.go:112] recreateIfNeeded on pause-022753: state=Running err=<nil>
	W1017 19:39:34.497934  715954 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:39:33.958506  713511 out.go:252]   - Generating certificates and keys ...
	I1017 19:39:33.958598  713511 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 19:39:33.958724  713511 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 19:39:34.083825  713511 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 19:39:34.288110  713511 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 19:39:34.637206  713511 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 19:39:34.807984  713511 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 19:39:35.001627  713511 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 19:39:35.001836  713511 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-907112] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 19:39:35.195522  713511 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 19:39:35.195642  713511 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-907112] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 19:39:35.354269  713511 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 19:39:35.437216  713511 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 19:39:35.609895  713511 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 19:39:35.610027  713511 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 19:39:35.667589  713511 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 19:39:35.912938  713511 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 19:39:36.042035  713511 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 19:39:36.204161  713511 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 19:39:36.204930  713511 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 19:39:36.208782  713511 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 19:39:34.499767  715954 out.go:252] * Updating the running docker "pause-022753" container ...
	I1017 19:39:34.499813  715954 machine.go:93] provisionDockerMachine start ...
	I1017 19:39:34.499908  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:34.519534  715954 main.go:141] libmachine: Using SSH client type: native
	I1017 19:39:34.519803  715954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1017 19:39:34.519819  715954 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:39:34.654984  715954 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-022753
	
	I1017 19:39:34.655013  715954 ubuntu.go:182] provisioning hostname "pause-022753"
	I1017 19:39:34.655107  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:34.675935  715954 main.go:141] libmachine: Using SSH client type: native
	I1017 19:39:34.676289  715954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1017 19:39:34.676307  715954 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-022753 && echo "pause-022753" | sudo tee /etc/hostname
	I1017 19:39:34.825660  715954 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-022753
	
	I1017 19:39:34.825757  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:34.844629  715954 main.go:141] libmachine: Using SSH client type: native
	I1017 19:39:34.844991  715954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1017 19:39:34.845019  715954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-022753' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-022753/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-022753' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:39:34.982309  715954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:39:34.982351  715954 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-492109/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-492109/.minikube}
	I1017 19:39:34.982377  715954 ubuntu.go:190] setting up certificates
	I1017 19:39:34.982386  715954 provision.go:84] configureAuth start
	I1017 19:39:34.982438  715954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-022753
	I1017 19:39:35.000748  715954 provision.go:143] copyHostCerts
	I1017 19:39:35.000820  715954 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem, removing ...
	I1017 19:39:35.000839  715954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem
	I1017 19:39:35.000923  715954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem (1078 bytes)
	I1017 19:39:35.001064  715954 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem, removing ...
	I1017 19:39:35.001082  715954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem
	I1017 19:39:35.001141  715954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem (1123 bytes)
	I1017 19:39:35.001252  715954 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem, removing ...
	I1017 19:39:35.001266  715954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem
	I1017 19:39:35.001307  715954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem (1679 bytes)
	I1017 19:39:35.001414  715954 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem org=jenkins.pause-022753 san=[127.0.0.1 192.168.103.2 localhost minikube pause-022753]
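
	provision.go:117 above generates a server certificate signed by the minikube CA, with the SAN set printed in the log. A self-contained sketch of an equivalent issuance with Go's crypto/x509; it generates a throwaway CA in memory, whereas the run above reuses ca.pem/ca-key.pem from disk, and error returns are discarded for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Hypothetical in-memory CA (the real run loads an existing one).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SAN set reported in the provision.go line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.pause-022753"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "pause-022753"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Printf("server cert: %d DER bytes, SANs %v\n", len(srvDER), srvTmpl.DNSNames)
	}
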
	I1017 19:39:35.080542  715954 provision.go:177] copyRemoteCerts
	I1017 19:39:35.080602  715954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:39:35.080663  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:35.100662  715954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/pause-022753/id_rsa Username:docker}
	I1017 19:39:35.199265  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 19:39:35.217438  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1017 19:39:35.237160  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 19:39:35.256794  715954 provision.go:87] duration metric: took 274.39133ms to configureAuth
	I1017 19:39:35.256839  715954 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:39:35.257075  715954 config.go:182] Loaded profile config "pause-022753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:39:35.257183  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:35.278888  715954 main.go:141] libmachine: Using SSH client type: native
	I1017 19:39:35.279214  715954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1017 19:39:35.279236  715954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:39:35.589383  715954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:39:35.589426  715954 machine.go:96] duration metric: took 1.08960191s to provisionDockerMachine
	I1017 19:39:35.589443  715954 start.go:293] postStartSetup for "pause-022753" (driver="docker")
	I1017 19:39:35.589457  715954 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:39:35.589527  715954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:39:35.589592  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:35.608002  715954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/pause-022753/id_rsa Username:docker}
	I1017 19:39:35.707426  715954 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:39:35.711407  715954 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:39:35.711442  715954 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:39:35.711455  715954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/addons for local assets ...
	I1017 19:39:35.711508  715954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/files for local assets ...
	I1017 19:39:35.711578  715954 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem -> 4957252.pem in /etc/ssl/certs
	I1017 19:39:35.711691  715954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:39:35.719913  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:39:35.739890  715954 start.go:296] duration metric: took 150.428745ms for postStartSetup
	I1017 19:39:35.740008  715954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:39:35.740065  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:35.758726  715954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/pause-022753/id_rsa Username:docker}
	I1017 19:39:35.855482  715954 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:39:35.860903  715954 fix.go:56] duration metric: took 1.382050656s for fixHost
	I1017 19:39:35.860937  715954 start.go:83] releasing machines lock for "pause-022753", held for 1.382110332s
	I1017 19:39:35.861018  715954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-022753
	I1017 19:39:35.881165  715954 ssh_runner.go:195] Run: cat /version.json
	I1017 19:39:35.881226  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:35.881266  715954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:39:35.881387  715954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-022753
	I1017 19:39:35.901375  715954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/pause-022753/id_rsa Username:docker}
	I1017 19:39:35.902469  715954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/pause-022753/id_rsa Username:docker}
	I1017 19:39:36.052752  715954 ssh_runner.go:195] Run: systemctl --version
	I1017 19:39:36.060676  715954 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:39:36.101547  715954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:39:36.106842  715954 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:39:36.106916  715954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:39:36.115717  715954 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
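
	The find/mv pipeline two lines up parks any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so the kindnet config installed later takes precedence; here it found nothing to disable. An illustrative Go equivalent (the glob patterns mirror the find predicates; this is not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		// Equivalent in spirit to `find /etc/cni/net.d ... -exec mv {} {}.mk_disabled`.
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pat) // Glob only errors on a bad pattern
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already parked
				}
				fmt.Println("disabling", m)
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Println(err)
				}
			}
		}
	}
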
	I1017 19:39:36.115746  715954 start.go:495] detecting cgroup driver to use...
	I1017 19:39:36.115782  715954 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 19:39:36.115828  715954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:39:36.134429  715954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:39:36.149521  715954 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:39:36.149572  715954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:39:36.165879  715954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:39:36.180200  715954 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:39:36.307383  715954 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:39:36.426869  715954 docker.go:234] disabling docker service ...
	I1017 19:39:36.426947  715954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:39:36.442944  715954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:39:36.456781  715954 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:39:36.569774  715954 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:39:36.693365  715954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:39:36.707203  715954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:39:36.722395  715954 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:39:36.722449  715954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:39:36.731984  715954 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 19:39:36.732046  715954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:39:36.742213  715954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:39:36.752436  715954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:39:36.762174  715954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:39:36.771130  715954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:39:36.780941  715954 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:39:36.790984  715954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:39:36.800769  715954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:39:36.809661  715954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:39:36.817634  715954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:39:36.947661  715954 ssh_runner.go:195] Run: sudo systemctl restart crio
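
	The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls) before crio is restarted. A small Go sketch of the two key substitutions, applied here to an in-memory stand-in for the config file; the keys and replacement values come from the log, the Go plumbing is illustrative:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Stand-in for /etc/crio/crio.conf.d/02-crio.conf; the pre-edit values
		// below are assumed, only the post-edit values appear in the log.
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	`
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "systemd"`)
		fmt.Print(conf) // after this, the real flow runs `systemctl restart crio`
	}
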
	I1017 19:39:37.119019  715954 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:39:37.119107  715954 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:39:37.124752  715954 start.go:563] Will wait 60s for crictl version
	I1017 19:39:37.124839  715954 ssh_runner.go:195] Run: which crictl
	I1017 19:39:37.129981  715954 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:39:37.165430  715954 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:39:37.165499  715954 ssh_runner.go:195] Run: crio --version
	I1017 19:39:37.213542  715954 ssh_runner.go:195] Run: crio --version
	I1017 19:39:37.259665  715954 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:39:34.157198  696997 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.072112033s)
	W1017 19:39:34.157258  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1017 19:39:34.157271  696997 logs.go:123] Gathering logs for kube-apiserver [20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890] ...
	I1017 19:39:34.157300  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890"
	I1017 19:39:34.196705  696997 logs.go:123] Gathering logs for kube-controller-manager [69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571] ...
	I1017 19:39:34.196743  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571"
	I1017 19:39:34.229471  696997 logs.go:123] Gathering logs for kube-controller-manager [c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36] ...
	I1017 19:39:34.229509  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36"
	W1017 19:39:34.260990  696997 logs.go:130] failed kube-controller-manager [c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:39:34.257882    1275 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36\": container with ID starting with c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36 not found: ID does not exist" containerID="c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36"
	time="2025-10-17T19:39:34Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36\": container with ID starting with c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1017 19:39:34.257882    1275 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36\": container with ID starting with c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36 not found: ID does not exist" containerID="c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36"
	time="2025-10-17T19:39:34Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36\": container with ID starting with c3cfdd38f00a25948e16f079187a935876da4dbcfaf6ad2f08c8c7198361ad36 not found: ID does not exist"
	
	** /stderr **
	I1017 19:39:34.261028  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:39:34.261043  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:39:36.801750  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:39:36.802177  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:39:36.802235  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:39:36.802280  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:39:36.833591  696997 cri.go:89] found id: "20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890"
	I1017 19:39:36.833620  696997 cri.go:89] found id: ""
	I1017 19:39:36.833630  696997 logs.go:282] 1 containers: [20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890]
	I1017 19:39:36.833720  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:39:36.837892  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:39:36.837960  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:39:36.875941  696997 cri.go:89] found id: ""
	I1017 19:39:36.875969  696997 logs.go:282] 0 containers: []
	W1017 19:39:36.875981  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:39:36.875988  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:39:36.876071  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:39:36.907346  696997 cri.go:89] found id: ""
	I1017 19:39:36.907374  696997 logs.go:282] 0 containers: []
	W1017 19:39:36.907383  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:39:36.907389  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:39:36.907450  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:39:36.940178  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:39:36.940203  696997 cri.go:89] found id: ""
	I1017 19:39:36.940213  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:39:36.940275  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:39:36.944702  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:39:36.944805  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:39:36.978373  696997 cri.go:89] found id: ""
	I1017 19:39:36.978406  696997 logs.go:282] 0 containers: []
	W1017 19:39:36.978418  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:39:36.978427  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:39:36.978494  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:39:37.008773  696997 cri.go:89] found id: "69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571"
	I1017 19:39:37.008800  696997 cri.go:89] found id: ""
	I1017 19:39:37.008812  696997 logs.go:282] 1 containers: [69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571]
	I1017 19:39:37.008866  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:39:37.013125  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:39:37.013187  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:39:37.042597  696997 cri.go:89] found id: ""
	I1017 19:39:37.042628  696997 logs.go:282] 0 containers: []
	W1017 19:39:37.042640  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:39:37.042649  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:39:37.042741  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:39:37.079568  696997 cri.go:89] found id: ""
	I1017 19:39:37.079601  696997 logs.go:282] 0 containers: []
	W1017 19:39:37.079613  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:39:37.079626  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:39:37.079643  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:39:37.101487  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:39:37.101521  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:39:37.185268  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:39:37.185296  696997 logs.go:123] Gathering logs for kube-apiserver [20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890] ...
	I1017 19:39:37.185313  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890"
	I1017 19:39:37.234158  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:39:37.234200  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:39:37.301602  696997 logs.go:123] Gathering logs for kube-controller-manager [69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571] ...
	I1017 19:39:37.301644  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571"
	I1017 19:39:37.336741  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:39:37.336773  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:39:37.398362  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:39:37.398409  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:39:36.210446  713511 out.go:252]   - Booting up control plane ...
	I1017 19:39:36.210599  713511 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 19:39:36.210755  713511 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 19:39:36.212347  713511 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 19:39:36.235088  713511 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 19:39:36.236058  713511 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 19:39:36.236125  713511 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 19:39:36.340996  713511 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1017 19:39:37.260923  715954 cli_runner.go:164] Run: docker network inspect pause-022753 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:39:37.284506  715954 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1017 19:39:37.290700  715954 kubeadm.go:883] updating cluster {Name:pause-022753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-022753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:39:37.290893  715954 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:39:37.290952  715954 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:39:37.332397  715954 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:39:37.332445  715954 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:39:37.332515  715954 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:39:37.370329  715954 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:39:37.370357  715954 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:39:37.370366  715954 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1017 19:39:37.370516  715954 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-022753 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-022753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
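
	The kubelet unit drop-in above is rendered from the cluster config. A hypothetical re-rendering with text/template, with the ExecStart flag list abbreviated to the values visible in the log (the template itself is illustrative, not minikube's source):

	package main

	import (
		"os"
		"text/template"
	)

	// Abbreviated form of the drop-in printed in the log; the full ExecStart
	// also carries --bootstrap-kubeconfig, --cgroups-per-qos, --config, and
	// --enforce-node-allocatable flags.
	const unit = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.K8sVersion}}/kubelet --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		t.Execute(os.Stdout, map[string]string{
			"Runtime":    "crio",
			"K8sVersion": "v1.34.1",
			"Node":       "pause-022753",
			"IP":         "192.168.103.2",
		})
	}
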
	I1017 19:39:37.370599  715954 ssh_runner.go:195] Run: crio config
	I1017 19:39:37.433195  715954 cni.go:84] Creating CNI manager for ""
	I1017 19:39:37.433229  715954 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:39:37.433251  715954 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:39:37.433282  715954 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-022753 NodeName:pause-022753 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:39:37.433486  715954 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-022753"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
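
	The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One way to sanity-check such a stream is to decode it document by document; a sketch using the third-party gopkg.in/yaml.v3 package, with the documents abbreviated to their headers:

	package main

	import (
		"fmt"
		"io"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// kubeadmYAML would hold the full multi-document config printed above;
		// abbreviated here to the apiVersion/kind headers only.
		kubeadmYAML := `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	`
		dec := yaml.NewDecoder(strings.NewReader(kubeadmYAML))
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break // end of the stream
			} else if err != nil {
				fmt.Println("parse error:", err)
				return
			}
			fmt.Printf("%s / %s\n", doc["apiVersion"], doc["kind"])
		}
	}
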
	
	I1017 19:39:37.433572  715954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:39:37.445676  715954 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:39:37.445860  715954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:39:37.457584  715954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:39:37.472860  715954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:39:37.493326  715954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1017 19:39:37.508635  715954 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1017 19:39:37.513866  715954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:39:37.643118  715954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:39:37.660702  715954 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753 for IP: 192.168.103.2
	I1017 19:39:37.660729  715954 certs.go:195] generating shared ca certs ...
	I1017 19:39:37.660751  715954 certs.go:227] acquiring lock for ca certs: {Name:mkc97483d62151ba5c32d923dd19e3e2b3661468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:39:37.660912  715954 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key
	I1017 19:39:37.660957  715954 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key
	I1017 19:39:37.660966  715954 certs.go:257] generating profile certs ...
	I1017 19:39:37.661071  715954 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/client.key
	I1017 19:39:37.661149  715954 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/apiserver.key.f5259238
	I1017 19:39:37.661203  715954 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/proxy-client.key
	I1017 19:39:37.661346  715954 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725.pem (1338 bytes)
	W1017 19:39:37.661379  715954 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725_empty.pem, impossibly tiny 0 bytes
	I1017 19:39:37.661387  715954 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 19:39:37.661418  715954 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem (1078 bytes)
	I1017 19:39:37.661447  715954 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:39:37.661474  715954 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem (1679 bytes)
	I1017 19:39:37.661523  715954 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:39:37.662367  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:39:37.682985  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:39:37.704608  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:39:37.725882  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 19:39:37.747620  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1017 19:39:37.770030  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 19:39:37.792441  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:39:37.813135  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:39:37.836140  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /usr/share/ca-certificates/4957252.pem (1708 bytes)
	I1017 19:39:37.858779  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:39:37.882961  715954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725.pem --> /usr/share/ca-certificates/495725.pem (1338 bytes)
	I1017 19:39:37.905394  715954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:39:37.921278  715954 ssh_runner.go:195] Run: openssl version
	I1017 19:39:37.929445  715954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4957252.pem && ln -fs /usr/share/ca-certificates/4957252.pem /etc/ssl/certs/4957252.pem"
	I1017 19:39:37.941165  715954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4957252.pem
	I1017 19:39:37.946561  715954 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/4957252.pem
	I1017 19:39:37.946631  715954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4957252.pem
	I1017 19:39:37.989814  715954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4957252.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:39:38.000567  715954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:39:38.011357  715954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:39:38.016233  715954 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:39:38.016295  715954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:39:38.061256  715954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:39:38.072373  715954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/495725.pem && ln -fs /usr/share/ca-certificates/495725.pem /etc/ssl/certs/495725.pem"
	I1017 19:39:38.083211  715954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/495725.pem
	I1017 19:39:38.088016  715954 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/495725.pem
	I1017 19:39:38.088094  715954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/495725.pem
	I1017 19:39:38.138504  715954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/495725.pem /etc/ssl/certs/51391683.0"
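
	Each CA bundle above is exposed twice: once under its file name in /usr/share/ca-certificates and once as a <subject-hash>.0 symlink in /etc/ssl/certs, which is how OpenSSL looks CAs up. A Go sketch of that hash-and-link step, assuming openssl is on PATH (b5213941 for minikubeCA.pem matches the symlink created in the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash mimics the `openssl x509 -hash -noout` + `ln -fs`
	// sequence in the log above.
	func linkBySubjectHash(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // -f semantics: replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
		if err != nil {
			fmt.Println(err)
		}
	}
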
	I1017 19:39:38.149196  715954 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:39:38.154139  715954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:39:38.197452  715954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:39:38.241325  715954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:39:38.290757  715954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:39:38.333262  715954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:39:38.376016  715954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
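
	The `openssl x509 -checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours, which decides whether certs must be regenerated before restart. The same check in pure Go, a sketch using crypto/x509 (path and window copied from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at pemPath expires within d,
	// the same question `openssl x509 -checkend 86400` answers above.
	func expiresWithin(pemPath string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
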
	I1017 19:39:38.414156  715954 kubeadm.go:400] StartCluster: {Name:pause-022753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-022753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:39:38.414271  715954 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:39:38.414326  715954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:39:38.456772  715954 cri.go:89] found id: "5ba37c1fa5f95bea1d59ac710f84739907945e2a197e61e47bf6d1476bc4ebeb"
	I1017 19:39:38.456806  715954 cri.go:89] found id: "cd36745e14f819f51f3a7ba2949b928f0863a2c547ad8c1c33f5e25cfdfefe41"
	I1017 19:39:38.456811  715954 cri.go:89] found id: "2116d855e664de0015c7b4e2404f3b0b9ef4055f7a661f3876be93bff370bf9a"
	I1017 19:39:38.456814  715954 cri.go:89] found id: "7e4b559e41fac9ceada6e300fd20518c0ea7b6817872e80f5d0c7e972c29c77f"
	I1017 19:39:38.456817  715954 cri.go:89] found id: "9aba35312d5276186cfba97e39f51e8ad13acf9f60a91db2337925d3104d8ac2"
	I1017 19:39:38.456819  715954 cri.go:89] found id: "947b66e7ea02a1cc68559e769194b29987eff2812abee9c7e28de62a892cd23c"
	I1017 19:39:38.456821  715954 cri.go:89] found id: "1bcdabcebd96eeb652ac709961258452dbc61310af706b256c1a8fde12bde65a"
	I1017 19:39:38.456824  715954 cri.go:89] found id: ""
	I1017 19:39:38.456874  715954 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 19:39:38.469656  715954 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:39:38Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:39:38.469762  715954 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:39:38.478825  715954 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 19:39:38.478845  715954 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 19:39:38.478896  715954 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 19:39:38.487442  715954 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:39:38.488113  715954 kubeconfig.go:125] found "pause-022753" server: "https://192.168.103.2:8443"
	I1017 19:39:38.489036  715954 kapi.go:59] client config for pause-022753: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/client.key", CAFile:"/home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819bc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 19:39:38.489497  715954 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 19:39:38.489513  715954 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 19:39:38.489518  715954 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1017 19:39:38.489522  715954 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 19:39:38.489525  715954 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 19:39:38.489916  715954 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 19:39:38.498147  715954 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1017 19:39:38.498182  715954 kubeadm.go:601] duration metric: took 19.330493ms to restartPrimaryControlPlane
	I1017 19:39:38.498193  715954 kubeadm.go:402] duration metric: took 84.049607ms to StartCluster
	I1017 19:39:38.498227  715954 settings.go:142] acquiring lock: {Name:mkb8ebc6edbbb6915dd74086f502bcc2721555a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:39:38.498314  715954 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:39:38.499335  715954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:39:38.499587  715954 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:39:38.499657  715954 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:39:38.499928  715954 config.go:182] Loaded profile config "pause-022753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:39:38.501712  715954 out.go:179] * Verifying Kubernetes components...
	I1017 19:39:38.502453  715954 out.go:179] * Enabled addons: 
	I1017 19:39:38.503152  715954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:39:38.503768  715954 addons.go:514] duration metric: took 4.121628ms for enable addons: enabled=[]
	I1017 19:39:38.630802  715954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:39:38.650059  715954 node_ready.go:35] waiting up to 6m0s for node "pause-022753" to be "Ready" ...
	I1017 19:39:38.661130  715954 node_ready.go:49] node "pause-022753" is "Ready"
	I1017 19:39:38.661160  715954 node_ready.go:38] duration metric: took 11.055364ms for node "pause-022753" to be "Ready" ...
	I1017 19:39:38.661176  715954 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:39:38.661225  715954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:39:38.674952  715954 api_server.go:72] duration metric: took 175.329305ms to wait for apiserver process to appear ...
	I1017 19:39:38.674984  715954 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:39:38.675008  715954 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 19:39:38.680171  715954 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
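The healthz probe above is a plain HTTPS GET against the apiserver and can be reproduced by hand. A minimal sketch, assuming the default anonymous access to /healthz that kubeadm-style clusters grant via the system:public-info-viewer role:

    curl -k "https://192.168.103.2:8443/healthz"
    # expected body on a healthy control plane: ok
    # /readyz?verbose breaks the result down per individual check:
    curl -k "https://192.168.103.2:8443/readyz?verbose"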
	I1017 19:39:38.681694  715954 api_server.go:141] control plane version: v1.34.1
	I1017 19:39:38.681728  715954 api_server.go:131] duration metric: took 6.734926ms to wait for apiserver health ...
	I1017 19:39:38.681739  715954 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:39:38.687560  715954 system_pods.go:59] 7 kube-system pods found
	I1017 19:39:38.687714  715954 system_pods.go:61] "coredns-66bc5c9577-58vbl" [00f23b6e-f269-461b-b2a4-fe6d6ba6c5b3] Running
	I1017 19:39:38.687751  715954 system_pods.go:61] "etcd-pause-022753" [80ccf168-fed0-48ee-a711-3a293e37fb97] Running
	I1017 19:39:38.687769  715954 system_pods.go:61] "kindnet-cxm7s" [ffa724f2-9fde-423c-834e-3713f5f2a57f] Running
	I1017 19:39:38.687785  715954 system_pods.go:61] "kube-apiserver-pause-022753" [6a0186d5-9c8a-4d71-9dae-c6362bda3ce4] Running
	I1017 19:39:38.687800  715954 system_pods.go:61] "kube-controller-manager-pause-022753" [874df238-097a-4f9a-97fd-495fb4d88349] Running
	I1017 19:39:38.687828  715954 system_pods.go:61] "kube-proxy-skgh2" [3590c80c-b67b-426f-a61e-1063cd30b23f] Running
	I1017 19:39:38.687863  715954 system_pods.go:61] "kube-scheduler-pause-022753" [5754cceb-b06f-4c71-86a8-feb8bba0400a] Running
	I1017 19:39:38.687882  715954 system_pods.go:74] duration metric: took 6.135595ms to wait for pod list to return data ...
	I1017 19:39:38.687904  715954 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:39:38.690826  715954 default_sa.go:45] found service account: "default"
	I1017 19:39:38.690877  715954 default_sa.go:55] duration metric: took 2.963378ms for default service account to be created ...
	I1017 19:39:38.690888  715954 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:39:38.695675  715954 system_pods.go:86] 7 kube-system pods found
	I1017 19:39:38.695719  715954 system_pods.go:89] "coredns-66bc5c9577-58vbl" [00f23b6e-f269-461b-b2a4-fe6d6ba6c5b3] Running
	I1017 19:39:38.695728  715954 system_pods.go:89] "etcd-pause-022753" [80ccf168-fed0-48ee-a711-3a293e37fb97] Running
	I1017 19:39:38.695733  715954 system_pods.go:89] "kindnet-cxm7s" [ffa724f2-9fde-423c-834e-3713f5f2a57f] Running
	I1017 19:39:38.695738  715954 system_pods.go:89] "kube-apiserver-pause-022753" [6a0186d5-9c8a-4d71-9dae-c6362bda3ce4] Running
	I1017 19:39:38.695744  715954 system_pods.go:89] "kube-controller-manager-pause-022753" [874df238-097a-4f9a-97fd-495fb4d88349] Running
	I1017 19:39:38.695749  715954 system_pods.go:89] "kube-proxy-skgh2" [3590c80c-b67b-426f-a61e-1063cd30b23f] Running
	I1017 19:39:38.695755  715954 system_pods.go:89] "kube-scheduler-pause-022753" [5754cceb-b06f-4c71-86a8-feb8bba0400a] Running
	I1017 19:39:38.695766  715954 system_pods.go:126] duration metric: took 4.870302ms to wait for k8s-apps to be running ...
	I1017 19:39:38.695776  715954 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:39:38.695836  715954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:39:38.716914  715954 system_svc.go:56] duration metric: took 21.124939ms WaitForService to wait for kubelet
	I1017 19:39:38.717018  715954 kubeadm.go:586] duration metric: took 217.400685ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:39:38.717059  715954 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:39:38.723933  715954 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 19:39:38.724120  715954 node_conditions.go:123] node cpu capacity is 8
	I1017 19:39:38.724147  715954 node_conditions.go:105] duration metric: took 7.055965ms to run NodePressure ...
	I1017 19:39:38.724163  715954 start.go:241] waiting for startup goroutines ...
	I1017 19:39:38.724171  715954 start.go:246] waiting for cluster config update ...
	I1017 19:39:38.724180  715954 start.go:255] writing updated cluster config ...
	I1017 19:39:38.724601  715954 ssh_runner.go:195] Run: rm -f paused
	I1017 19:39:38.730246  715954 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:39:38.731193  715954 kapi.go:59] client config for pause-022753: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-492109/.minikube/profiles/pause-022753/client.key", CAFile:"/home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819bc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 19:39:38.734948  715954 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-58vbl" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:38.740313  715954 pod_ready.go:94] pod "coredns-66bc5c9577-58vbl" is "Ready"
	I1017 19:39:38.740340  715954 pod_ready.go:86] duration metric: took 5.365382ms for pod "coredns-66bc5c9577-58vbl" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:38.742741  715954 pod_ready.go:83] waiting for pod "etcd-pause-022753" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:38.747141  715954 pod_ready.go:94] pod "etcd-pause-022753" is "Ready"
	I1017 19:39:38.747167  715954 pod_ready.go:86] duration metric: took 4.405584ms for pod "etcd-pause-022753" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:38.749534  715954 pod_ready.go:83] waiting for pod "kube-apiserver-pause-022753" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:38.753727  715954 pod_ready.go:94] pod "kube-apiserver-pause-022753" is "Ready"
	I1017 19:39:38.753750  715954 pod_ready.go:86] duration metric: took 4.194052ms for pod "kube-apiserver-pause-022753" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:38.755749  715954 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-022753" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:39.134659  715954 pod_ready.go:94] pod "kube-controller-manager-pause-022753" is "Ready"
	I1017 19:39:39.134704  715954 pod_ready.go:86] duration metric: took 378.934411ms for pod "kube-controller-manager-pause-022753" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:39.334735  715954 pod_ready.go:83] waiting for pod "kube-proxy-skgh2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:39.735193  715954 pod_ready.go:94] pod "kube-proxy-skgh2" is "Ready"
	I1017 19:39:39.735219  715954 pod_ready.go:86] duration metric: took 400.45918ms for pod "kube-proxy-skgh2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:39.935899  715954 pod_ready.go:83] waiting for pod "kube-scheduler-pause-022753" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:40.335512  715954 pod_ready.go:94] pod "kube-scheduler-pause-022753" is "Ready"
	I1017 19:39:40.335538  715954 pod_ready.go:86] duration metric: took 399.601806ms for pod "kube-scheduler-pause-022753" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:39:40.335552  715954 pod_ready.go:40] duration metric: took 1.605268487s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:39:40.395111  715954 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 19:39:40.396751  715954 out.go:179] * Done! kubectl is now configured to use "pause-022753" cluster and "default" namespace by default
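At this point the kubeconfig at /home/jenkins/minikube-integration/21753-492109/kubeconfig has been rewritten, so plain kubectl targets the restarted cluster. A quick sanity check (sketch):

    kubectl config current-context   # -> pause-022753
    kubectl get pods -A              # should list the seven kube-system pods shown above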
	I1017 19:39:40.842968  713511 kubeadm.go:318] [apiclient] All control plane components are healthy after 4.502646 seconds
	I1017 19:39:40.843157  713511 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 19:39:40.858637  713511 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 19:39:41.381070  713511 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 19:39:41.381324  713511 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-907112 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 19:39:41.891417  713511 kubeadm.go:318] [bootstrap-token] Using token: qxqgah.x8ddopkk8ykbd5wk
	I1017 19:39:41.892746  713511 out.go:252]   - Configuring RBAC rules ...
	I1017 19:39:41.892931  713511 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 19:39:41.898485  713511 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 19:39:41.906895  713511 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 19:39:41.910256  713511 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 19:39:41.913463  713511 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 19:39:41.917356  713511 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 19:39:41.927853  713511 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 19:39:42.125853  713511 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 19:39:42.303855  713511 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 19:39:42.305662  713511 kubeadm.go:318] 
	I1017 19:39:42.305781  713511 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 19:39:42.305789  713511 kubeadm.go:318] 
	I1017 19:39:42.305886  713511 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 19:39:42.305892  713511 kubeadm.go:318] 
	I1017 19:39:42.305924  713511 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 19:39:42.305997  713511 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 19:39:42.306059  713511 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 19:39:42.306065  713511 kubeadm.go:318] 
	I1017 19:39:42.306132  713511 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 19:39:42.306138  713511 kubeadm.go:318] 
	I1017 19:39:42.306198  713511 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 19:39:42.306203  713511 kubeadm.go:318] 
	I1017 19:39:42.306272  713511 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 19:39:42.306371  713511 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 19:39:42.306459  713511 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 19:39:42.306465  713511 kubeadm.go:318] 
	I1017 19:39:42.306576  713511 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 19:39:42.306666  713511 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 19:39:42.306673  713511 kubeadm.go:318] 
	I1017 19:39:42.306787  713511 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token qxqgah.x8ddopkk8ykbd5wk \
	I1017 19:39:42.306904  713511 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e \
	I1017 19:39:42.306929  713511 kubeadm.go:318] 	--control-plane 
	I1017 19:39:42.306934  713511 kubeadm.go:318] 
	I1017 19:39:42.307029  713511 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 19:39:42.307048  713511 kubeadm.go:318] 
	I1017 19:39:42.307140  713511 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token qxqgah.x8ddopkk8ykbd5wk \
	I1017 19:39:42.307254  713511 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e 
	I1017 19:39:42.310461  713511 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 19:39:42.310601  713511 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
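Both preflight warnings are benign here. The first comes from kubeadm's SystemVerification check, which tries to load the `configs` module so it can read the kernel config from /proc/config.gz; cloud kernels usually ship the config as a file instead, which the check also accepts. A hedged way to confirm that by hand:

    # the check falls back to on-disk configs when /proc/config.gz is absent:
    ls /boot/config-$(uname -r)
    sudo modprobe configs || echo "module not shipped; /boot/config-* is used instead"
    # the second warning is silenced by enabling the unit:
    sudo systemctl enable kubelet.service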
	I1017 19:39:42.310638  713511 cni.go:84] Creating CNI manager for ""
	I1017 19:39:42.310648  713511 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:39:42.312162  713511 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 19:39:37.449424  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:39:37.449464  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:39:40.050794  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:39:40.051249  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:39:40.051315  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:39:40.051424  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:39:40.084704  696997 cri.go:89] found id: "20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890"
	I1017 19:39:40.084733  696997 cri.go:89] found id: ""
	I1017 19:39:40.084744  696997 logs.go:282] 1 containers: [20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890]
	I1017 19:39:40.084819  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:39:40.089705  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:39:40.089798  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:39:40.122877  696997 cri.go:89] found id: ""
	I1017 19:39:40.122909  696997 logs.go:282] 0 containers: []
	W1017 19:39:40.122920  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:39:40.122934  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:39:40.123004  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:39:40.156778  696997 cri.go:89] found id: ""
	I1017 19:39:40.156805  696997 logs.go:282] 0 containers: []
	W1017 19:39:40.156815  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:39:40.156823  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:39:40.156886  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:39:40.191177  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:39:40.191207  696997 cri.go:89] found id: ""
	I1017 19:39:40.191218  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:39:40.191282  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:39:40.196194  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:39:40.196277  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:39:40.230553  696997 cri.go:89] found id: ""
	I1017 19:39:40.230585  696997 logs.go:282] 0 containers: []
	W1017 19:39:40.230597  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:39:40.230605  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:39:40.230669  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:39:40.264715  696997 cri.go:89] found id: "69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571"
	I1017 19:39:40.264740  696997 cri.go:89] found id: ""
	I1017 19:39:40.264748  696997 logs.go:282] 1 containers: [69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571]
	I1017 19:39:40.264804  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:39:40.269556  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:39:40.269641  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:39:40.311062  696997 cri.go:89] found id: ""
	I1017 19:39:40.311242  696997 logs.go:282] 0 containers: []
	W1017 19:39:40.311258  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:39:40.311266  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:39:40.311348  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:39:40.349086  696997 cri.go:89] found id: ""
	I1017 19:39:40.349118  696997 logs.go:282] 0 containers: []
	W1017 19:39:40.349129  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:39:40.349142  696997 logs.go:123] Gathering logs for kube-controller-manager [69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571] ...
	I1017 19:39:40.349159  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69a656275c0cf235f6825c755bfc80d9b521b23c16db8b98fdf5ca0a358b4571"
	I1017 19:39:40.385873  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:39:40.385906  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:39:40.438621  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:39:40.438660  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:39:40.474960  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:39:40.474987  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:39:40.562277  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:39:40.562317  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:39:40.580589  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:39:40.580623  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:39:40.641887  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:39:40.641915  696997 logs.go:123] Gathering logs for kube-apiserver [20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890] ...
	I1017 19:39:40.641929  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20a7dc2d4f69f3ae96cb5f77ce29674e9a5dcb9bd289dcf39f7969cd06df1890"
	I1017 19:39:40.676274  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:39:40.676315  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
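The diagnostics pass above (process 696997) is minikube's log collector: it resolves container IDs per component with crictl, then tails each container that exists. The equivalent manual loop, sketched with the same commands the log shows:

    for c in kube-apiserver kube-scheduler kube-controller-manager; do
      id=$(sudo crictl ps -a --quiet --name="$c" | head -n1)   # empty if the component never started
      [ -n "$id" ] && sudo crictl logs --tail 400 "$id"
    done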
	I1017 19:39:42.313335  713511 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 19:39:42.318456  713511 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1017 19:39:42.318481  713511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 19:39:42.334143  713511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
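Once the manifest is applied, the kindnet DaemonSet should roll out one pod per node. A quick verification sketch, reusing the kubeconfig and kubectl binary from the lines above and assuming the DaemonSet is named kindnet (as the kindnet-* pod names elsewhere in this report suggest):

    sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get daemonset kindnet -o wide
    sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get pods -l app=kindnet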
	
	
	==> CRI-O <==
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.044879993Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.045729087Z" level=info msg="Conmon does support the --sync option"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.045748414Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.045762564Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.046465956Z" level=info msg="Conmon does support the --sync option"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.046482037Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.051033043Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.051058164Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.051557757Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.051990795Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.052045615Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.058532312Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.111585896Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-58vbl Namespace:kube-system ID:b79856717505088643a9994f73a8b304e415632d0ef6934b3d52bc1e2cc9a861 UID:00f23b6e-f269-461b-b2a4-fe6d6ba6c5b3 NetNS:/var/run/netns/5f767ddf-ead3-45f7-8814-665ecf571a69 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00089cc38}] Aliases:map[]}"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.112040353Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-58vbl for CNI network kindnet (type=ptp)"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.11354792Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.113592643Z" level=info msg="Starting seccomp notifier watcher"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.113703309Z" level=info msg="Create NRI interface"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.113922171Z" level=info msg="built-in NRI default validator is disabled"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.113951443Z" level=info msg="runtime interface created"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.113968544Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.113976853Z" level=info msg="runtime interface starting up..."
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.113991407Z" level=info msg="starting plugins..."
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.114008955Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 17 19:39:37 pause-022753 crio[2154]: time="2025-10-17T19:39:37.114663321Z" level=info msg="No systemd watchdog enabled"
	Oct 17 19:39:37 pause-022753 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
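The runtime came up cleanly under systemd. The same view can be pulled on demand from the node; a sketch using the commands this report already relies on:

    sudo systemctl status crio --no-pager
    sudo journalctl -u crio -n 50 --no-pager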
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	5ba37c1fa5f95       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   b798567175050       coredns-66bc5c9577-58vbl               kube-system
	cd36745e14f81       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   c6b999bc6aa5f       kindnet-cxm7s                          kube-system
	2116d855e664d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   23 seconds ago      Running             kube-proxy                0                   ae5e090db564a       kube-proxy-skgh2                       kube-system
	7e4b559e41fac       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   34 seconds ago      Running             kube-apiserver            0                   09036383e12a6       kube-apiserver-pause-022753            kube-system
	9aba35312d527       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   34 seconds ago      Running             kube-scheduler            0                   c22c241656c64       kube-scheduler-pause-022753            kube-system
	947b66e7ea02a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   34 seconds ago      Running             kube-controller-manager   0                   7845bf3cd3be9       kube-controller-manager-pause-022753   kube-system
	1bcdabcebd96e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   34 seconds ago      Running             etcd                      0                   c618e1733eb87       etcd-pause-022753                      kube-system
	
	
	==> coredns [5ba37c1fa5f95bea1d59ac710f84739907945e2a197e61e47bf6d1476bc4ebeb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45167 - 50446 "HINFO IN 5867497680200239394.7504510633316126314. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066189214s
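The single HINFO query with a random name is CoreDNS's loop-plugin self-probe; the NXDOMAIN answer means no forwarding loop was detected. In-cluster resolution can be spot-checked against the kube-dns ClusterIP allocated earlier in this run (10.96.0.10); a sketch:

    kubectl run -it --rm dnsprobe --image=busybox:1.36 --restart=Never -- \
        nslookup kubernetes.default.svc.cluster.local 10.96.0.10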
	
	
	==> describe nodes <==
	Name:               pause-022753
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-022753
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=pause-022753
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_39_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:39:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-022753
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:39:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:39:31 +0000   Fri, 17 Oct 2025 19:39:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:39:31 +0000   Fri, 17 Oct 2025 19:39:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:39:31 +0000   Fri, 17 Oct 2025 19:39:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:39:31 +0000   Fri, 17 Oct 2025 19:39:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-022753
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                eeebd0d7-b163-484e-9d73-72842433540f
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-58vbl                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-022753                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-cxm7s                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-022753             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-022753    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-skgh2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-022753             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node pause-022753 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node pause-022753 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node pause-022753 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node pause-022753 event: Registered Node pause-022753 in Controller
	  Normal  NodeReady                14s   kubelet          Node pause-022753 status is now: NodeReady
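The section above is the verbatim output of the usual describe call and can be regenerated at any time:

    kubectl describe node pause-022753
    kubectl get node pause-022753 -o yaml   # the raw object behind the same data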
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d1 49 91 03 c2 08 06
	[  +0.000804] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 a9 2b 44 da ae 08 06
	[Oct17 18:59] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022229] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023876] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
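The repeated "martian source" entries record packets whose source address is implausible for the interface they arrived on (here, a 127.0.0.1 source arriving on eth0); the kernel logs them because martian logging is enabled alongside reverse-path filtering. A sketch of the relevant knobs:

    sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth0.rp_filter
    sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.eth0.log_martians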
	
	
	==> etcd [1bcdabcebd96eeb652ac709961258452dbc61310af706b256c1a8fde12bde65a] <==
	{"level":"warn","ts":"2025-10-17T19:39:12.038048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.044233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.051016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.059208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.066581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.073313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.080103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.094246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.101409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.107767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.113849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.120571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.127120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.150465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.158785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.164816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:39:12.229321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52034","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T19:39:27.226148Z","caller":"traceutil/trace.go:172","msg":"trace[1440733298] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"239.63091ms","start":"2025-10-17T19:39:26.986499Z","end":"2025-10-17T19:39:27.226130Z","steps":["trace[1440733298] 'process raft request'  (duration: 239.514668ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:39:27.437726Z","caller":"traceutil/trace.go:172","msg":"trace[516016304] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"201.914248ms","start":"2025-10-17T19:39:27.235765Z","end":"2025-10-17T19:39:27.437679Z","steps":["trace[516016304] 'process raft request'  (duration: 160.970313ms)","trace[516016304] 'compare'  (duration: 40.787808ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T19:39:27.820398Z","caller":"traceutil/trace.go:172","msg":"trace[1512509626] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"161.284186ms","start":"2025-10-17T19:39:27.659082Z","end":"2025-10-17T19:39:27.820366Z","steps":["trace[1512509626] 'process raft request'  (duration: 161.154687ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:39:28.179235Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"197.555726ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T19:39:28.179329Z","caller":"traceutil/trace.go:172","msg":"trace[707968486] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:382; }","duration":"197.701002ms","start":"2025-10-17T19:39:27.981611Z","end":"2025-10-17T19:39:28.179312Z","steps":["trace[707968486] 'agreement among raft nodes before linearized reading'  (duration: 54.333099ms)","trace[707968486] 'range keys from in-memory index tree'  (duration: 143.202219ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T19:39:28.180050Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.430748ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789398982225323 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-022753\" mod_revision:382 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-022753\" value_size:7621 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-022753\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-17T19:39:28.180153Z","caller":"traceutil/trace.go:172","msg":"trace[1661665688] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"350.600509ms","start":"2025-10-17T19:39:27.829536Z","end":"2025-10-17T19:39:28.180137Z","steps":["trace[1661665688] 'process raft request'  (duration: 206.467169ms)","trace[1661665688] 'compare'  (duration: 143.305173ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T19:39:28.180217Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-17T19:39:27.829517Z","time spent":"350.662933ms","remote":"127.0.0.1:51262","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7683,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-022753\" mod_revision:382 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-022753\" value_size:7621 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-022753\" > >"}
	
	
	==> kernel <==
	 19:39:45 up  3:22,  0 user,  load average: 5.07, 3.24, 1.91
	Linux pause-022753 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cd36745e14f819f51f3a7ba2949b928f0863a2c547ad8c1c33f5e25cfdfefe41] <==
	I1017 19:39:21.375450       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:39:21.467150       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1017 19:39:21.467348       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:39:21.467375       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:39:21.467408       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:39:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:39:21.667055       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:39:21.667101       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:39:21.667139       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:39:21.667951       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:39:21.967950       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:39:21.967989       1 metrics.go:72] Registering metrics
	I1017 19:39:21.968056       1 controller.go:711] "Syncing nftables rules"
	I1017 19:39:31.579809       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 19:39:31.579882       1 main.go:301] handling current node
	I1017 19:39:41.586761       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 19:39:41.586793       1 main.go:301] handling current node
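kindnet is reconciling on a 10-second tick and, with a single node, only handles its own routes; the nri.sock dial failure is likely just the NRI socket not being mounted into the pod, and the plugin exits without affecting pod networking. Live logs can be followed with (assuming the DaemonSet name kindnet):

    kubectl -n kube-system logs ds/kindnet -f --tail 20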
	
	
	==> kube-apiserver [7e4b559e41fac9ceada6e300fd20518c0ea7b6817872e80f5d0c7e972c29c77f] <==
	I1017 19:39:12.732785       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:39:12.732882       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1017 19:39:12.733867       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 19:39:12.738853       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:39:12.739130       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1017 19:39:12.747588       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:39:12.747862       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:39:12.928123       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:39:13.636561       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 19:39:13.640477       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 19:39:13.640500       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:39:14.229560       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:39:14.274846       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:39:14.345485       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 19:39:14.352103       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1017 19:39:14.353514       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:39:14.358322       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:39:14.672486       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:39:15.410824       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 19:39:15.422084       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 19:39:15.429755       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 19:39:19.975376       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:39:20.677254       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:39:20.682199       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:39:20.824490       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [947b66e7ea02a1cc68559e769194b29987eff2812abee9c7e28de62a892cd23c] <==
	I1017 19:39:19.670895       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:39:19.671077       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 19:39:19.672154       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 19:39:19.672196       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:39:19.672234       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 19:39:19.672381       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 19:39:19.672408       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 19:39:19.672441       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 19:39:19.672479       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 19:39:19.672517       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:39:19.672809       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 19:39:19.673584       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 19:39:19.675027       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 19:39:19.675746       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 19:39:19.676870       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 19:39:19.676935       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 19:39:19.676975       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 19:39:19.676982       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 19:39:19.676987       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 19:39:19.678026       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:39:19.685395       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-022753" podCIDRs=["10.244.0.0/24"]
	I1017 19:39:19.692308       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:39:19.707563       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 19:39:19.707742       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:39:34.624501       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2116d855e664de0015c7b4e2404f3b0b9ef4055f7a661f3876be93bff370bf9a] <==
	I1017 19:39:21.267812       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:39:21.355793       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:39:21.456276       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:39:21.456328       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1017 19:39:21.456468       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:39:21.476961       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:39:21.477035       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:39:21.482482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:39:21.482930       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:39:21.482957       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:39:21.484012       1 config.go:200] "Starting service config controller"
	I1017 19:39:21.484028       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:39:21.484080       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:39:21.484152       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:39:21.484180       1 config.go:309] "Starting node config controller"
	I1017 19:39:21.484196       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:39:21.484205       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:39:21.484187       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:39:21.484216       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:39:21.584580       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:39:21.584716       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:39:21.584751       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [9aba35312d5276186cfba97e39f51e8ad13acf9f60a91db2337925d3104d8ac2] <==
	E1017 19:39:12.681704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:39:12.681736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:39:12.681749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:39:12.681796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:39:12.681870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:39:12.681860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:39:12.681903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:39:12.681996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:39:12.682050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:39:13.508501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:39:13.564057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:39:13.582808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:39:13.601533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:39:13.707198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 19:39:13.724398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:39:13.757623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:39:13.769104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:39:13.778612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:39:13.779672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:39:13.876072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:39:13.890677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:39:13.955005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:39:13.955921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:39:14.013266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1017 19:39:15.678569       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:39:16 pause-022753 kubelet[1294]: E1017 19:39:16.297021    1294 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-022753\" already exists" pod="kube-system/kube-apiserver-pause-022753"
	Oct 17 19:39:16 pause-022753 kubelet[1294]: I1017 19:39:16.317107    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-022753" podStartSLOduration=1.317082748 podStartE2EDuration="1.317082748s" podCreationTimestamp="2025-10-17 19:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:39:16.317078094 +0000 UTC m=+1.145671631" watchObservedRunningTime="2025-10-17 19:39:16.317082748 +0000 UTC m=+1.145676278"
	Oct 17 19:39:16 pause-022753 kubelet[1294]: I1017 19:39:16.331469    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-022753" podStartSLOduration=1.331445842 podStartE2EDuration="1.331445842s" podCreationTimestamp="2025-10-17 19:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:39:16.331322015 +0000 UTC m=+1.159915551" watchObservedRunningTime="2025-10-17 19:39:16.331445842 +0000 UTC m=+1.160039375"
	Oct 17 19:39:16 pause-022753 kubelet[1294]: I1017 19:39:16.354228    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-022753" podStartSLOduration=1.354205217 podStartE2EDuration="1.354205217s" podCreationTimestamp="2025-10-17 19:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:39:16.342247751 +0000 UTC m=+1.170841289" watchObservedRunningTime="2025-10-17 19:39:16.354205217 +0000 UTC m=+1.182798755"
	Oct 17 19:39:16 pause-022753 kubelet[1294]: I1017 19:39:16.369420    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-022753" podStartSLOduration=1.369391021 podStartE2EDuration="1.369391021s" podCreationTimestamp="2025-10-17 19:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:39:16.354909158 +0000 UTC m=+1.183502700" watchObservedRunningTime="2025-10-17 19:39:16.369391021 +0000 UTC m=+1.197984558"
	Oct 17 19:39:19 pause-022753 kubelet[1294]: I1017 19:39:19.699218    1294 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 17 19:39:19 pause-022753 kubelet[1294]: I1017 19:39:19.700530    1294 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 19:39:20 pause-022753 kubelet[1294]: I1017 19:39:20.888892    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3590c80c-b67b-426f-a61e-1063cd30b23f-kube-proxy\") pod \"kube-proxy-skgh2\" (UID: \"3590c80c-b67b-426f-a61e-1063cd30b23f\") " pod="kube-system/kube-proxy-skgh2"
	Oct 17 19:39:20 pause-022753 kubelet[1294]: I1017 19:39:20.888934    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffa724f2-9fde-423c-834e-3713f5f2a57f-xtables-lock\") pod \"kindnet-cxm7s\" (UID: \"ffa724f2-9fde-423c-834e-3713f5f2a57f\") " pod="kube-system/kindnet-cxm7s"
	Oct 17 19:39:20 pause-022753 kubelet[1294]: I1017 19:39:20.888950    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffa724f2-9fde-423c-834e-3713f5f2a57f-lib-modules\") pod \"kindnet-cxm7s\" (UID: \"ffa724f2-9fde-423c-834e-3713f5f2a57f\") " pod="kube-system/kindnet-cxm7s"
	Oct 17 19:39:20 pause-022753 kubelet[1294]: I1017 19:39:20.888973    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkpn5\" (UniqueName: \"kubernetes.io/projected/3590c80c-b67b-426f-a61e-1063cd30b23f-kube-api-access-kkpn5\") pod \"kube-proxy-skgh2\" (UID: \"3590c80c-b67b-426f-a61e-1063cd30b23f\") " pod="kube-system/kube-proxy-skgh2"
	Oct 17 19:39:20 pause-022753 kubelet[1294]: I1017 19:39:20.888991    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3590c80c-b67b-426f-a61e-1063cd30b23f-xtables-lock\") pod \"kube-proxy-skgh2\" (UID: \"3590c80c-b67b-426f-a61e-1063cd30b23f\") " pod="kube-system/kube-proxy-skgh2"
	Oct 17 19:39:20 pause-022753 kubelet[1294]: I1017 19:39:20.889012    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ffa724f2-9fde-423c-834e-3713f5f2a57f-cni-cfg\") pod \"kindnet-cxm7s\" (UID: \"ffa724f2-9fde-423c-834e-3713f5f2a57f\") " pod="kube-system/kindnet-cxm7s"
	Oct 17 19:39:20 pause-022753 kubelet[1294]: I1017 19:39:20.889045    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3590c80c-b67b-426f-a61e-1063cd30b23f-lib-modules\") pod \"kube-proxy-skgh2\" (UID: \"3590c80c-b67b-426f-a61e-1063cd30b23f\") " pod="kube-system/kube-proxy-skgh2"
	Oct 17 19:39:20 pause-022753 kubelet[1294]: I1017 19:39:20.889077    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdh9p\" (UniqueName: \"kubernetes.io/projected/ffa724f2-9fde-423c-834e-3713f5f2a57f-kube-api-access-kdh9p\") pod \"kindnet-cxm7s\" (UID: \"ffa724f2-9fde-423c-834e-3713f5f2a57f\") " pod="kube-system/kindnet-cxm7s"
	Oct 17 19:39:21 pause-022753 kubelet[1294]: I1017 19:39:21.315190    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cxm7s" podStartSLOduration=1.315168257 podStartE2EDuration="1.315168257s" podCreationTimestamp="2025-10-17 19:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:39:21.315002574 +0000 UTC m=+6.143596132" watchObservedRunningTime="2025-10-17 19:39:21.315168257 +0000 UTC m=+6.143761801"
	Oct 17 19:39:21 pause-022753 kubelet[1294]: I1017 19:39:21.345807    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-skgh2" podStartSLOduration=1.3457806780000001 podStartE2EDuration="1.345780678s" podCreationTimestamp="2025-10-17 19:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:39:21.345724631 +0000 UTC m=+6.174318168" watchObservedRunningTime="2025-10-17 19:39:21.345780678 +0000 UTC m=+6.174374291"
	Oct 17 19:39:31 pause-022753 kubelet[1294]: I1017 19:39:31.781328    1294 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 17 19:39:31 pause-022753 kubelet[1294]: I1017 19:39:31.867798    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5w8\" (UniqueName: \"kubernetes.io/projected/00f23b6e-f269-461b-b2a4-fe6d6ba6c5b3-kube-api-access-lc5w8\") pod \"coredns-66bc5c9577-58vbl\" (UID: \"00f23b6e-f269-461b-b2a4-fe6d6ba6c5b3\") " pod="kube-system/coredns-66bc5c9577-58vbl"
	Oct 17 19:39:31 pause-022753 kubelet[1294]: I1017 19:39:31.867843    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00f23b6e-f269-461b-b2a4-fe6d6ba6c5b3-config-volume\") pod \"coredns-66bc5c9577-58vbl\" (UID: \"00f23b6e-f269-461b-b2a4-fe6d6ba6c5b3\") " pod="kube-system/coredns-66bc5c9577-58vbl"
	Oct 17 19:39:32 pause-022753 kubelet[1294]: I1017 19:39:32.343125    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-58vbl" podStartSLOduration=12.343100858 podStartE2EDuration="12.343100858s" podCreationTimestamp="2025-10-17 19:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:39:32.343013501 +0000 UTC m=+17.171607039" watchObservedRunningTime="2025-10-17 19:39:32.343100858 +0000 UTC m=+17.171694392"
	Oct 17 19:39:40 pause-022753 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 19:39:40 pause-022753 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 19:39:40 pause-022753 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 19:39:40 pause-022753 systemd[1]: kubelet.service: Consumed 1.290s CPU time.
	
-- /stdout --
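Incidentally, the kube-proxy log above warns that nodePortAddresses is unset, so NodePort connections are accepted on all local IPs, and itself suggests `--nodeport-addresses primary`. A minimal sketch of applying that remedy on a kubeadm-style cluster such as this one (the ConfigMap edit is an assumption for illustration; the test does not configure this):

	# Set nodePortAddresses to "primary" in the kube-proxy ConfigMap,
	# then restart the daemonset so the change takes effect.
	kubectl -n kube-system edit configmap kube-proxy            # set nodePortAddresses: ["primary"]
	kubectl -n kube-system rollout restart daemonset kube-proxy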
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-022753 -n pause-022753
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-022753 -n pause-022753: exit status 2 (349.608746ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-022753 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-907112 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-907112 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (297.306503ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:40:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-907112 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
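The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state probe, which runs `sudo runc list -f json` inside the node and trips over the missing /run/runc directory. A hypothetical manual reproduction of the same probe, assuming the profile is still running (diagnostic sketch, not part of the harness):

	# Re-run the exact command the probe uses inside the node; per the
	# stderr above it fails because /run/runc does not exist.
	minikube -p old-k8s-version-907112 ssh -- sudo runc list -f json
	minikube -p old-k8s-version-907112 ssh -- ls -ld /run/runc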
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-907112 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-907112 describe deploy/metrics-server -n kube-system: exit status 1 (73.547203ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-907112 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
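The assertion at start_stop_delete_test.go:219 expects the metrics-server deployment to reference the rewritten image fake.domain/registry.k8s.io/echoserver:1.4. The deployment is absent because the enable itself failed; had it existed, a query along these lines (a sketch of the check, not the harness's actual code) would surface the image in use:

	# Print the container image(s) of the metrics-server deployment.
	kubectl --context old-k8s-version-907112 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'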
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-907112
helpers_test.go:243: (dbg) docker inspect old-k8s-version-907112:

-- stdout --
	[
	    {
	        "Id": "c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69",
	        "Created": "2025-10-17T19:39:28.47315274Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 714255,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:39:28.511428091Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69/hostname",
	        "HostsPath": "/var/lib/docker/containers/c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69/hosts",
	        "LogPath": "/var/lib/docker/containers/c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69/c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69-json.log",
	        "Name": "/old-k8s-version-907112",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-907112:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-907112",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69",
	                "LowerDir": "/var/lib/docker/overlay2/e6c30ea89b09c5e82cbf480acc38ef16124ed01036b47190b5890c66fdac61c3-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e6c30ea89b09c5e82cbf480acc38ef16124ed01036b47190b5890c66fdac61c3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e6c30ea89b09c5e82cbf480acc38ef16124ed01036b47190b5890c66fdac61c3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e6c30ea89b09c5e82cbf480acc38ef16124ed01036b47190b5890c66fdac61c3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-907112",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-907112/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-907112",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-907112",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-907112",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce02c4f4ef5bf02e1d8011c5365f19e85b0a8a166105188079f93970a2c77ecc",
	            "SandboxKey": "/var/run/docker/netns/ce02c4f4ef5b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-907112": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:da:e2:fd:5e:c6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e97054581b64b00fcec9937bf013cc1657d289bfdedb4be6f078111f0c49299",
	                    "EndpointID": "137ca8ddc7d39e2544efb3378e26543e5c9f564fe9a5ab8378016e4c863506c0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-907112",
	                        "c9e45391db92"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
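Only a few fields of the full docker inspect above matter for this post-mortem: the container state and the host port forwarded to the API server. A trimmed query with Go templates, equivalent in spirit (sketch only, not what helpers_test.go runs):

	# Container state, paused flag, and the host side of 8443/tcp.
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' old-k8s-version-907112
	docker port old-k8s-version-907112 8443/tcp    # -> 127.0.0.1:33426 per the inspect output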
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-907112 -n old-k8s-version-907112
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-907112 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-907112 logs -n 25: (1.195809114s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-448344 sudo docker system info                                                                                                                                                                                                      │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo containerd config dump                                                                                                                                                                                                  │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo crio config                                                                                                                                                                                                             │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ delete  │ -p cilium-448344                                                                                                                                                                                                                              │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ start   │ -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:40 UTC │
	│ start   │ -p pause-022753 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ pause   │ -p pause-022753 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ delete  │ -p pause-022753                                                                                                                                                                                                                               │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ start   │ -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ start   │ -p cert-expiration-141205 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-141205 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ delete  │ -p cert-expiration-141205                                                                                                                                                                                                                     │ cert-expiration-141205 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-907112 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ start   │ -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-599709     │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:40:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:40:20.439712  726310 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:40:20.439839  726310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:40:20.439849  726310 out.go:374] Setting ErrFile to fd 2...
	I1017 19:40:20.439856  726310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:40:20.440087  726310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:40:20.440617  726310 out.go:368] Setting JSON to false
	I1017 19:40:20.441934  726310 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12159,"bootTime":1760717861,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:40:20.442043  726310 start.go:141] virtualization: kvm guest
	I1017 19:40:20.444263  726310 out.go:179] * [embed-certs-599709] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:40:20.446329  726310 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:40:20.446341  726310 notify.go:220] Checking for updates...
	I1017 19:40:20.449015  726310 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:40:20.451712  726310 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:40:20.453217  726310 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:40:20.454770  726310 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:40:20.456279  726310 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> CRI-O <==
	Oct 17 19:40:08 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:08.628801679Z" level=info msg="Starting container: 479b0b51a3d9c09acd1ad32d874bf79fbf0fa91b202a32f66036610c74644940" id=39023dba-50f4-41bd-a4d5-814e70125694 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:40:08 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:08.630630506Z" level=info msg="Started container" PID=2143 containerID=479b0b51a3d9c09acd1ad32d874bf79fbf0fa91b202a32f66036610c74644940 description=kube-system/coredns-5dd5756b68-gnqx4/coredns id=39023dba-50f4-41bd-a4d5-814e70125694 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2ca8e6230a4a2e87cadedd37dd2bd820bdf34690e000e8872a9d75ba29d070f4
	Oct 17 19:40:11 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:11.877763975Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b13cf641-03c9-468d-a5a2-1fb093b88044 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:40:11 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:11.877902747Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.000124293Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9635569fb1e7d6b863e2981bfd9fa846ae4d7d9f308d3eedd988a2ab9db40f67 UID:0c75288d-bccd-48cb-8395-3ac83448ebf7 NetNS:/var/run/netns/32c752d2-a1d6-420b-beac-84f6db494f27 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b708}] Aliases:map[]}"
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.0001675Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.057158424Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9635569fb1e7d6b863e2981bfd9fa846ae4d7d9f308d3eedd988a2ab9db40f67 UID:0c75288d-bccd-48cb-8395-3ac83448ebf7 NetNS:/var/run/netns/32c752d2-a1d6-420b-beac-84f6db494f27 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b708}] Aliases:map[]}"
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.057371611Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.058727305Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.060071341Z" level=info msg="Ran pod sandbox 9635569fb1e7d6b863e2981bfd9fa846ae4d7d9f308d3eedd988a2ab9db40f67 with infra container: default/busybox/POD" id=b13cf641-03c9-468d-a5a2-1fb093b88044 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.061594835Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6c6dbf33-5f64-4c90-ae49-291cf4039282 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.061793881Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6c6dbf33-5f64-4c90-ae49-291cf4039282 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.061850064Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=6c6dbf33-5f64-4c90-ae49-291cf4039282 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.062422938Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=56d5de71-6ecc-4242-b488-8e998ee4c319 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.064023423Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.917884306Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=56d5de71-6ecc-4242-b488-8e998ee4c319 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.918964953Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=52b04dc0-8c72-40ba-b582-f6b9b9a213b4 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.92097461Z" level=info msg="Creating container: default/busybox/busybox" id=355dd976-2cd8-4eea-9240-352be950abc5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.92185696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.926019682Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.926623486Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.962645818Z" level=info msg="Created container 41928cf6bfbcd5ab0308b4fcaeaa26a45e9686a311709ffc7bf26ab6c65dbc22: default/busybox/busybox" id=355dd976-2cd8-4eea-9240-352be950abc5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.963423809Z" level=info msg="Starting container: 41928cf6bfbcd5ab0308b4fcaeaa26a45e9686a311709ffc7bf26ab6c65dbc22" id=39b8a6fd-475d-47c0-84ec-2a7918032bdc name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:40:12 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:12.966110515Z" level=info msg="Started container" PID=2220 containerID=41928cf6bfbcd5ab0308b4fcaeaa26a45e9686a311709ffc7bf26ab6c65dbc22 description=default/busybox/busybox id=39b8a6fd-475d-47c0-84ec-2a7918032bdc name=/runtime.v1.RuntimeService/StartContainer sandboxID=9635569fb1e7d6b863e2981bfd9fa846ae4d7d9f308d3eedd988a2ab9db40f67
	Oct 17 19:40:19 old-k8s-version-907112 crio[778]: time="2025-10-17T19:40:19.67732182Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	41928cf6bfbcd       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   9635569fb1e7d       busybox                                          default
	479b0b51a3d9c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   2ca8e6230a4a2       coredns-5dd5756b68-gnqx4                         kube-system
	fcf11e189a0e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   5d9fe9b2bd77b       storage-provisioner                              kube-system
	6b719c24e2e9e       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   156c6055869d4       kindnet-2zq9g                                    kube-system
	24b2f92eaf67b       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      25 seconds ago      Running             kube-proxy                0                   587447cb4b2d6       kube-proxy-lzbjz                                 kube-system
	699759e66473a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      44 seconds ago      Running             etcd                      0                   6500880895e9c       etcd-old-k8s-version-907112                      kube-system
	253fdc0b04ee8       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      44 seconds ago      Running             kube-controller-manager   0                   5ff0613a2f6b3       kube-controller-manager-old-k8s-version-907112   kube-system
	7ef3762fd406b       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      44 seconds ago      Running             kube-apiserver            0                   ebdca56e3cb59       kube-apiserver-old-k8s-version-907112            kube-system
	5dc941a791f61       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      44 seconds ago      Running             kube-scheduler            0                   f5fe5582940cf       kube-scheduler-old-k8s-version-907112            kube-system
	
	
	==> coredns [479b0b51a3d9c09acd1ad32d874bf79fbf0fa91b202a32f66036610c74644940] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43374 - 13060 "HINFO IN 2962879174723746054.7021570236334120872. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.104543894s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-907112
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-907112
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=old-k8s-version-907112
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_39_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:39:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-907112
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:40:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:40:12 +0000   Fri, 17 Oct 2025 19:39:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:40:12 +0000   Fri, 17 Oct 2025 19:39:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:40:12 +0000   Fri, 17 Oct 2025 19:39:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:40:12 +0000   Fri, 17 Oct 2025 19:40:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-907112
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                b9d63c36-87df-4fe2-81c2-a81cd9f5ae31
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-gnqx4                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-old-k8s-version-907112                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-2zq9g                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-old-k8s-version-907112             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-907112    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-lzbjz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-old-k8s-version-907112             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 39s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s   kubelet          Node old-k8s-version-907112 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s   kubelet          Node old-k8s-version-907112 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s   kubelet          Node old-k8s-version-907112 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node old-k8s-version-907112 event: Registered Node old-k8s-version-907112 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-907112 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d1 49 91 03 c2 08 06
	[  +0.000804] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 a9 2b 44 da ae 08 06
	[Oct17 18:59] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022229] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023876] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	
	
	==> etcd [699759e66473ae584d18d1f09e220a32d7c538ce89634ef2430503a1cc18e78c] <==
	{"level":"info","ts":"2025-10-17T19:39:37.214086Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-17T19:39:37.214875Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-17T19:39:37.215114Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-17T19:39:37.215147Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-17T19:39:37.215289Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-17T19:39:37.215315Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-17T19:39:37.501971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-17T19:39:37.50202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-17T19:39:37.502052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-17T19:39:37.50207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-17T19:39:37.502079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-17T19:39:37.502091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-17T19:39:37.502104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-17T19:39:37.502966Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T19:39:37.50349Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-907112 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-17T19:39:37.503675Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T19:39:37.504978Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-17T19:39:37.505018Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-17T19:39:37.503758Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T19:39:37.505207Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T19:39:37.505259Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T19:39:37.503787Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T19:39:37.506414Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-17T19:39:37.506622Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-17T19:40:02.927617Z","caller":"traceutil/trace.go:171","msg":"trace[1167805285] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"177.253869ms","start":"2025-10-17T19:40:02.750339Z","end":"2025-10-17T19:40:02.927593Z","steps":["trace[1167805285] 'process raft request'  (duration: 177.109363ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:40:21 up  3:22,  0 user,  load average: 4.35, 3.28, 1.97
	Linux old-k8s-version-907112 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6b719c24e2e9e8a3df4e75dae22c498e37767285fd1eb9b2b59bcc2f4bfea210] <==
	I1017 19:39:57.809094       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:39:57.809438       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 19:39:57.809619       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:39:57.809639       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:39:57.809668       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:39:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:39:58.012551       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:39:58.012580       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:39:58.012592       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:39:58.013302       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:39:58.277266       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:39:58.277365       1 metrics.go:72] Registering metrics
	I1017 19:39:58.277473       1 controller.go:711] "Syncing nftables rules"
	I1017 19:40:08.021085       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:40:08.021139       1 main.go:301] handling current node
	I1017 19:40:18.015867       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:40:18.015899       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7ef3762fd406b2ce7a2d7e49d800da9ff6c4f0086327a24e77c674971e8ce22a] <==
	I1017 19:39:38.915443       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1017 19:39:38.915464       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1017 19:39:38.915480       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1017 19:39:38.915492       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 19:39:38.915504       1 aggregator.go:166] initial CRD sync complete...
	I1017 19:39:38.915518       1 autoregister_controller.go:141] Starting autoregister controller
	I1017 19:39:38.915523       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 19:39:38.915530       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:39:38.916759       1 controller.go:624] quota admission added evaluator for: namespaces
	I1017 19:39:39.094884       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:39:39.821965       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 19:39:39.825791       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 19:39:39.825808       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:39:40.292484       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:39:40.350416       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:39:40.426885       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 19:39:40.432827       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1017 19:39:40.433886       1 controller.go:624] quota admission added evaluator for: endpoints
	I1017 19:39:40.443148       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:39:40.856043       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1017 19:39:42.112890       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1017 19:39:42.124380       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 19:39:42.135738       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1017 19:39:55.105015       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1017 19:39:55.167256       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [253fdc0b04ee850706d94fa1eb6f43302b45f262b9b21f338921715d7a80153c] <==
	I1017 19:39:55.171947       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1017 19:39:55.187249       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-8gjvk"
	I1017 19:39:55.192509       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gnqx4"
	I1017 19:39:55.200540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="29.058046ms"
	I1017 19:39:55.208578       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.979422ms"
	I1017 19:39:55.208759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="130.309µs"
	I1017 19:39:55.210751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.419µs"
	I1017 19:39:55.232981       1 shared_informer.go:318] Caches are synced for resource quota
	I1017 19:39:55.255781       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1017 19:39:55.260027       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1017 19:39:55.262234       1 shared_informer.go:318] Caches are synced for resource quota
	I1017 19:39:55.647196       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 19:39:55.652538       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 19:39:55.652579       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1017 19:39:55.704496       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1017 19:39:55.724551       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-8gjvk"
	I1017 19:39:55.732189       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.498007ms"
	I1017 19:39:55.739952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.676293ms"
	I1017 19:39:55.740194       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.659µs"
	I1017 19:40:08.266726       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="169.846µs"
	I1017 19:40:08.291311       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="119.22µs"
	I1017 19:40:09.337858       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="114.125µs"
	I1017 19:40:09.385363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.713443ms"
	I1017 19:40:09.385561       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="144.588µs"
	I1017 19:40:10.115527       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [24b2f92eaf67b175ee477c86005d5b1dc41c369bc7018b95ae9717bb4f7c30bf] <==
	I1017 19:39:55.544049       1 server_others.go:69] "Using iptables proxy"
	I1017 19:39:55.557107       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1017 19:39:55.596031       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:39:55.600087       1 server_others.go:152] "Using iptables Proxier"
	I1017 19:39:55.600143       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1017 19:39:55.600169       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1017 19:39:55.600211       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1017 19:39:55.600508       1 server.go:846] "Version info" version="v1.28.0"
	I1017 19:39:55.600529       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:39:55.601477       1 config.go:188] "Starting service config controller"
	I1017 19:39:55.601553       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1017 19:39:55.601606       1 config.go:97] "Starting endpoint slice config controller"
	I1017 19:39:55.601633       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1017 19:39:55.604175       1 config.go:315] "Starting node config controller"
	I1017 19:39:55.604968       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1017 19:39:55.702702       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1017 19:39:55.702772       1 shared_informer.go:318] Caches are synced for service config
	I1017 19:39:55.707854       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [5dc941a791f6162bc4333a41d8f2a11a3a19bd6190c5eff47fa5f3131c278ca3] <==
	W1017 19:39:38.869920       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1017 19:39:38.869962       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1017 19:39:38.870009       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1017 19:39:38.870038       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1017 19:39:38.870046       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1017 19:39:38.870013       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1017 19:39:38.870061       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1017 19:39:38.870068       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1017 19:39:39.783109       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1017 19:39:39.783164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1017 19:39:39.783124       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1017 19:39:39.783191       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1017 19:39:39.786592       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1017 19:39:39.786618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1017 19:39:39.852062       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1017 19:39:39.852095       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1017 19:39:39.882662       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1017 19:39:39.882719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1017 19:39:39.896273       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1017 19:39:39.896316       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1017 19:39:39.937790       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1017 19:39:39.937826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1017 19:39:39.948317       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1017 19:39:39.948353       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1017 19:39:42.564358       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 17 19:39:55 old-k8s-version-907112 kubelet[1386]: I1017 19:39:55.126796    1386 topology_manager.go:215] "Topology Admit Handler" podUID="5a8911e1-bc4a-4439-a24e-fb2fcbba3a59" podNamespace="kube-system" podName="kindnet-2zq9g"
	Oct 17 19:39:55 old-k8s-version-907112 kubelet[1386]: I1017 19:39:55.127018    1386 topology_manager.go:215] "Topology Admit Handler" podUID="fa0af865-7908-432f-ad19-e9bfc1a59110" podNamespace="kube-system" podName="kube-proxy-lzbjz"
	Oct 17 19:39:55 old-k8s-version-907112 kubelet[1386]: I1017 19:39:55.158279    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cptz5\" (UniqueName: \"kubernetes.io/projected/fa0af865-7908-432f-ad19-e9bfc1a59110-kube-api-access-cptz5\") pod \"kube-proxy-lzbjz\" (UID: \"fa0af865-7908-432f-ad19-e9bfc1a59110\") " pod="kube-system/kube-proxy-lzbjz"
	Oct 17 19:39:55 old-k8s-version-907112 kubelet[1386]: I1017 19:39:55.158349    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa0af865-7908-432f-ad19-e9bfc1a59110-lib-modules\") pod \"kube-proxy-lzbjz\" (UID: \"fa0af865-7908-432f-ad19-e9bfc1a59110\") " pod="kube-system/kube-proxy-lzbjz"
	Oct 17 19:39:55 old-k8s-version-907112 kubelet[1386]: I1017 19:39:55.158471    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z5rd\" (UniqueName: \"kubernetes.io/projected/5a8911e1-bc4a-4439-a24e-fb2fcbba3a59-kube-api-access-5z5rd\") pod \"kindnet-2zq9g\" (UID: \"5a8911e1-bc4a-4439-a24e-fb2fcbba3a59\") " pod="kube-system/kindnet-2zq9g"
	Oct 17 19:39:55 old-k8s-version-907112 kubelet[1386]: I1017 19:39:55.158521    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa0af865-7908-432f-ad19-e9bfc1a59110-xtables-lock\") pod \"kube-proxy-lzbjz\" (UID: \"fa0af865-7908-432f-ad19-e9bfc1a59110\") " pod="kube-system/kube-proxy-lzbjz"
	Oct 17 19:39:55 old-k8s-version-907112 kubelet[1386]: I1017 19:39:55.158551    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5a8911e1-bc4a-4439-a24e-fb2fcbba3a59-cni-cfg\") pod \"kindnet-2zq9g\" (UID: \"5a8911e1-bc4a-4439-a24e-fb2fcbba3a59\") " pod="kube-system/kindnet-2zq9g"
	Oct 17 19:39:55 old-k8s-version-907112 kubelet[1386]: I1017 19:39:55.158577    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a8911e1-bc4a-4439-a24e-fb2fcbba3a59-xtables-lock\") pod \"kindnet-2zq9g\" (UID: \"5a8911e1-bc4a-4439-a24e-fb2fcbba3a59\") " pod="kube-system/kindnet-2zq9g"
	Oct 17 19:39:55 old-k8s-version-907112 kubelet[1386]: I1017 19:39:55.158602    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a8911e1-bc4a-4439-a24e-fb2fcbba3a59-lib-modules\") pod \"kindnet-2zq9g\" (UID: \"5a8911e1-bc4a-4439-a24e-fb2fcbba3a59\") " pod="kube-system/kindnet-2zq9g"
	Oct 17 19:39:55 old-k8s-version-907112 kubelet[1386]: I1017 19:39:55.158657    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fa0af865-7908-432f-ad19-e9bfc1a59110-kube-proxy\") pod \"kube-proxy-lzbjz\" (UID: \"fa0af865-7908-432f-ad19-e9bfc1a59110\") " pod="kube-system/kube-proxy-lzbjz"
	Oct 17 19:39:55 old-k8s-version-907112 kubelet[1386]: I1017 19:39:55.224358    1386 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 17 19:39:55 old-k8s-version-907112 kubelet[1386]: I1017 19:39:55.225338    1386 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 19:39:56 old-k8s-version-907112 kubelet[1386]: I1017 19:39:56.530925    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lzbjz" podStartSLOduration=1.5308664109999999 podCreationTimestamp="2025-10-17 19:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:39:56.2633607 +0000 UTC m=+14.176756290" watchObservedRunningTime="2025-10-17 19:39:56.530866411 +0000 UTC m=+14.444262001"
	Oct 17 19:39:58 old-k8s-version-907112 kubelet[1386]: I1017 19:39:58.288130    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-2zq9g" podStartSLOduration=1.2520753550000001 podCreationTimestamp="2025-10-17 19:39:55 +0000 UTC" firstStartedPulling="2025-10-17 19:39:55.440392467 +0000 UTC m=+13.353788042" lastFinishedPulling="2025-10-17 19:39:57.476374559 +0000 UTC m=+15.389770149" observedRunningTime="2025-10-17 19:39:58.28772142 +0000 UTC m=+16.201117015" watchObservedRunningTime="2025-10-17 19:39:58.288057462 +0000 UTC m=+16.201453052"
	Oct 17 19:40:08 old-k8s-version-907112 kubelet[1386]: I1017 19:40:08.238639    1386 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 17 19:40:08 old-k8s-version-907112 kubelet[1386]: I1017 19:40:08.264781    1386 topology_manager.go:215] "Topology Admit Handler" podUID="b27f6472-0799-450c-a27b-f6a0e8284284" podNamespace="kube-system" podName="storage-provisioner"
	Oct 17 19:40:08 old-k8s-version-907112 kubelet[1386]: I1017 19:40:08.266640    1386 topology_manager.go:215] "Topology Admit Handler" podUID="11bdcb3d-be4e-4373-aa89-087c1da542d4" podNamespace="kube-system" podName="coredns-5dd5756b68-gnqx4"
	Oct 17 19:40:08 old-k8s-version-907112 kubelet[1386]: I1017 19:40:08.336551    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b8qm\" (UniqueName: \"kubernetes.io/projected/b27f6472-0799-450c-a27b-f6a0e8284284-kube-api-access-4b8qm\") pod \"storage-provisioner\" (UID: \"b27f6472-0799-450c-a27b-f6a0e8284284\") " pod="kube-system/storage-provisioner"
	Oct 17 19:40:08 old-k8s-version-907112 kubelet[1386]: I1017 19:40:08.336608    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11bdcb3d-be4e-4373-aa89-087c1da542d4-config-volume\") pod \"coredns-5dd5756b68-gnqx4\" (UID: \"11bdcb3d-be4e-4373-aa89-087c1da542d4\") " pod="kube-system/coredns-5dd5756b68-gnqx4"
	Oct 17 19:40:08 old-k8s-version-907112 kubelet[1386]: I1017 19:40:08.336732    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b27f6472-0799-450c-a27b-f6a0e8284284-tmp\") pod \"storage-provisioner\" (UID: \"b27f6472-0799-450c-a27b-f6a0e8284284\") " pod="kube-system/storage-provisioner"
	Oct 17 19:40:08 old-k8s-version-907112 kubelet[1386]: I1017 19:40:08.336781    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzc28\" (UniqueName: \"kubernetes.io/projected/11bdcb3d-be4e-4373-aa89-087c1da542d4-kube-api-access-lzc28\") pod \"coredns-5dd5756b68-gnqx4\" (UID: \"11bdcb3d-be4e-4373-aa89-087c1da542d4\") " pod="kube-system/coredns-5dd5756b68-gnqx4"
	Oct 17 19:40:09 old-k8s-version-907112 kubelet[1386]: I1017 19:40:09.337973    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-gnqx4" podStartSLOduration=14.337907102 podCreationTimestamp="2025-10-17 19:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:40:09.337584569 +0000 UTC m=+27.250980159" watchObservedRunningTime="2025-10-17 19:40:09.337907102 +0000 UTC m=+27.251302692"
	Oct 17 19:40:11 old-k8s-version-907112 kubelet[1386]: I1017 19:40:11.575272    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.575198815 podCreationTimestamp="2025-10-17 19:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:40:09.38580076 +0000 UTC m=+27.299196395" watchObservedRunningTime="2025-10-17 19:40:11.575198815 +0000 UTC m=+29.488594459"
	Oct 17 19:40:11 old-k8s-version-907112 kubelet[1386]: I1017 19:40:11.575524    1386 topology_manager.go:215] "Topology Admit Handler" podUID="0c75288d-bccd-48cb-8395-3ac83448ebf7" podNamespace="default" podName="busybox"
	Oct 17 19:40:11 old-k8s-version-907112 kubelet[1386]: I1017 19:40:11.656730    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xft8m\" (UniqueName: \"kubernetes.io/projected/0c75288d-bccd-48cb-8395-3ac83448ebf7-kube-api-access-xft8m\") pod \"busybox\" (UID: \"0c75288d-bccd-48cb-8395-3ac83448ebf7\") " pod="default/busybox"
	
	
	==> storage-provisioner [fcf11e189a0e8d4eca861ddb75f9ac37919038e644351f0344192732f6685433] <==
	I1017 19:40:08.636471       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 19:40:08.647640       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 19:40:08.647702       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1017 19:40:08.655032       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 19:40:08.655271       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-907112_1f863b31-1b24-4a29-80d0-46422ca13f71!
	I1017 19:40:08.655211       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3f9a20f1-f928-44b0-afc3-41b87fa18958", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-907112_1f863b31-1b24-4a29-80d0-46422ca13f71 became leader
	I1017 19:40:08.755463       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-907112_1f863b31-1b24-4a29-80d0-46422ca13f71!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-907112 -n old-k8s-version-907112
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-907112 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.40s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.44s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-171807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-171807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (274.180362ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:40:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-171807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
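For context on the MK_ADDON_ENABLE_PAUSED exit above: before enabling an addon, minikube first checks whether the cluster's containers are paused, and on this runtime that check shells out to "sudo runc list -f json" inside the node, which is the step that fails. A minimal sketch of reproducing the check by hand, assuming SSH access to the node via minikube ssh (profile name taken from the command above); this is an editor-added sketch, not output from this run:

	# Hedged diagnostic sketch, not captured test output.
	# Re-run the exact command the pause check uses:
	minikube -p no-preload-171807 ssh -- sudo runc list -f json
	# Here it exits 1 with "open /run/runc: no such file or directory",
	# so confirm whether runc's default state directory exists at all:
	minikube -p no-preload-171807 ssh -- ls -ld /run/runc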
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-171807 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-171807 describe deploy/metrics-server -n kube-system: exit status 1 (85.669666ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-171807 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
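The assertion that fails here compares the metrics-server deployment's container image against the remapped registry; because the addon never enabled, the deployment does not exist and the deployment info above is empty. A sketch of the equivalent manual check (editor-added, assuming the deployment had been created):

	kubectl --context no-preload-171807 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4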
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-171807
helpers_test.go:243: (dbg) docker inspect no-preload-171807:

-- stdout --
	[
	    {
	        "Id": "6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5",
	        "Created": "2025-10-17T19:39:49.424559642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 720240,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:39:49.468987811Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5/hostname",
	        "HostsPath": "/var/lib/docker/containers/6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5/hosts",
	        "LogPath": "/var/lib/docker/containers/6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5/6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5-json.log",
	        "Name": "/no-preload-171807",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-171807:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-171807",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5",
	                "LowerDir": "/var/lib/docker/overlay2/2465273c560fa18f3af90b746f46f6002d9f83f3da22434fa2cf4768a02a24de-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2465273c560fa18f3af90b746f46f6002d9f83f3da22434fa2cf4768a02a24de/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2465273c560fa18f3af90b746f46f6002d9f83f3da22434fa2cf4768a02a24de/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2465273c560fa18f3af90b746f46f6002d9f83f3da22434fa2cf4768a02a24de/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-171807",
	                "Source": "/var/lib/docker/volumes/no-preload-171807/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-171807",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-171807",
	                "name.minikube.sigs.k8s.io": "no-preload-171807",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e00461e65382daf81df41b5b1b23f8141f6343ba5ca3eac54b8c6f5492fdd27f",
	            "SandboxKey": "/var/run/docker/netns/e00461e65382",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-171807": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:dd:20:16:ab:ed",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4d20f1cdd8a9ad4b75566b03de0ba176c437b8596d360733d4786d1a9071e68d",
	                    "EndpointID": "7b4ae07ab6fce21aeb977f5d01be7b43b569d2f2ded421c5408ebee79adadc73",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-171807",
	                        "6738402fa93e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
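The inspect output above shows the kicbase container publishing its service ports (22, 2376, 5000, 8443, 32443) on loopback-only host ports. As a minimal sketch, assuming only the container name from the logs, the same mapping can be read back with docker's Go-template format flag:

	docker container inspect no-preload-171807 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# prints 33428, the host port mapped to the node's SSH port in the JSON above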
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-171807 -n no-preload-171807
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-171807 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-171807 logs -n 25: (1.172201416s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-448344 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo containerd config dump                                                                                                                                                                                                  │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo crio config                                                                                                                                                                                                             │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ delete  │ -p cilium-448344                                                                                                                                                                                                                              │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ start   │ -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:40 UTC │
	│ start   │ -p pause-022753 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ pause   │ -p pause-022753 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ delete  │ -p pause-022753                                                                                                                                                                                                                               │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ start   │ -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:40 UTC │
	│ start   │ -p cert-expiration-141205 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-141205 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ delete  │ -p cert-expiration-141205                                                                                                                                                                                                                     │ cert-expiration-141205 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-907112 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ start   │ -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-599709     │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ stop    │ -p old-k8s-version-907112 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-907112 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ start   │ -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-171807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:40:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:40:38.459981  731035 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:40:38.460294  731035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:40:38.460304  731035 out.go:374] Setting ErrFile to fd 2...
	I1017 19:40:38.460309  731035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:40:38.460615  731035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:40:38.461265  731035 out.go:368] Setting JSON to false
	I1017 19:40:38.462865  731035 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12177,"bootTime":1760717861,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:40:38.463003  731035 start.go:141] virtualization: kvm guest
	I1017 19:40:38.464992  731035 out.go:179] * [old-k8s-version-907112] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:40:38.466269  731035 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:40:38.466355  731035 notify.go:220] Checking for updates...
	I1017 19:40:38.468982  731035 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:40:38.470395  731035 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:40:38.471595  731035 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	W1017 19:40:34.801222  719770 node_ready.go:57] node "no-preload-171807" has "Ready":"False" status (will retry)
	W1017 19:40:36.801703  719770 node_ready.go:57] node "no-preload-171807" has "Ready":"False" status (will retry)
	I1017 19:40:37.801552  719770 node_ready.go:49] node "no-preload-171807" is "Ready"
	I1017 19:40:37.801593  719770 node_ready.go:38] duration metric: took 14.004395173s for node "no-preload-171807" to be "Ready" ...
	I1017 19:40:37.801613  719770 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:40:37.801672  719770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:40:37.821054  719770 api_server.go:72] duration metric: took 14.437858177s to wait for apiserver process to appear ...
	I1017 19:40:37.821099  719770 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:40:37.821124  719770 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 19:40:37.828835  719770 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1017 19:40:37.830034  719770 api_server.go:141] control plane version: v1.34.1
	I1017 19:40:37.830064  719770 api_server.go:131] duration metric: took 8.957249ms to wait for apiserver health ...
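The healthz wait above amounts to polling the apiserver endpoint until it answers 200 with body "ok". A sketch of the same probe by hand, with the IP and port taken from the log (-k is needed because the serving cert is signed by minikubeCA rather than a system CA):

	curl -sk https://192.168.103.2:8443/healthz
	# expected response body: ok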
	I1017 19:40:37.830073  719770 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:40:37.833746  719770 system_pods.go:59] 8 kube-system pods found
	I1017 19:40:37.833786  719770 system_pods.go:61] "coredns-66bc5c9577-gnx5k" [5cc39277-706b-4f4e-87c5-1af53966018f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:40:37.833795  719770 system_pods.go:61] "etcd-no-preload-171807" [1a0e3833-508c-4886-8024-5adacd924486] Running
	I1017 19:40:37.833804  719770 system_pods.go:61] "kindnet-tk5hv" [06dcd1bc-85e2-4dd3-aa6c-8869c5bdcc7f] Running
	I1017 19:40:37.833809  719770 system_pods.go:61] "kube-apiserver-no-preload-171807" [e63385fa-73db-4ce6-9c1f-8cf6e2088a50] Running
	I1017 19:40:37.833814  719770 system_pods.go:61] "kube-controller-manager-no-preload-171807" [b0498011-1f3a-4293-8719-b1e940e3a906] Running
	I1017 19:40:37.833818  719770 system_pods.go:61] "kube-proxy-cdbjg" [f638a0d8-f3ed-4ab9-89c8-b68a756e51e9] Running
	I1017 19:40:37.833823  719770 system_pods.go:61] "kube-scheduler-no-preload-171807" [0778bb3b-6ddd-4643-8d52-0ae7904294dd] Running
	I1017 19:40:37.833830  719770 system_pods.go:61] "storage-provisioner" [72f77177-3dcc-471b-a2a5-baaa7a566bc9] Pending
	I1017 19:40:37.833837  719770 system_pods.go:74] duration metric: took 3.757946ms to wait for pod list to return data ...
	I1017 19:40:37.833847  719770 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:40:37.837025  719770 default_sa.go:45] found service account: "default"
	I1017 19:40:37.837053  719770 default_sa.go:55] duration metric: took 3.197961ms for default service account to be created ...
	I1017 19:40:37.837065  719770 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:40:37.840496  719770 system_pods.go:86] 8 kube-system pods found
	I1017 19:40:37.840533  719770 system_pods.go:89] "coredns-66bc5c9577-gnx5k" [5cc39277-706b-4f4e-87c5-1af53966018f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:40:37.840542  719770 system_pods.go:89] "etcd-no-preload-171807" [1a0e3833-508c-4886-8024-5adacd924486] Running
	I1017 19:40:37.840552  719770 system_pods.go:89] "kindnet-tk5hv" [06dcd1bc-85e2-4dd3-aa6c-8869c5bdcc7f] Running
	I1017 19:40:37.840558  719770 system_pods.go:89] "kube-apiserver-no-preload-171807" [e63385fa-73db-4ce6-9c1f-8cf6e2088a50] Running
	I1017 19:40:37.840565  719770 system_pods.go:89] "kube-controller-manager-no-preload-171807" [b0498011-1f3a-4293-8719-b1e940e3a906] Running
	I1017 19:40:37.840570  719770 system_pods.go:89] "kube-proxy-cdbjg" [f638a0d8-f3ed-4ab9-89c8-b68a756e51e9] Running
	I1017 19:40:37.840577  719770 system_pods.go:89] "kube-scheduler-no-preload-171807" [0778bb3b-6ddd-4643-8d52-0ae7904294dd] Running
	I1017 19:40:37.840619  719770 system_pods.go:89] "storage-provisioner" [72f77177-3dcc-471b-a2a5-baaa7a566bc9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:40:37.840664  719770 retry.go:31] will retry after 309.993531ms: missing components: kube-dns
	I1017 19:40:38.165340  719770 system_pods.go:86] 8 kube-system pods found
	I1017 19:40:38.165377  719770 system_pods.go:89] "coredns-66bc5c9577-gnx5k" [5cc39277-706b-4f4e-87c5-1af53966018f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:40:38.165391  719770 system_pods.go:89] "etcd-no-preload-171807" [1a0e3833-508c-4886-8024-5adacd924486] Running
	I1017 19:40:38.165400  719770 system_pods.go:89] "kindnet-tk5hv" [06dcd1bc-85e2-4dd3-aa6c-8869c5bdcc7f] Running
	I1017 19:40:38.165406  719770 system_pods.go:89] "kube-apiserver-no-preload-171807" [e63385fa-73db-4ce6-9c1f-8cf6e2088a50] Running
	I1017 19:40:38.165411  719770 system_pods.go:89] "kube-controller-manager-no-preload-171807" [b0498011-1f3a-4293-8719-b1e940e3a906] Running
	I1017 19:40:38.165416  719770 system_pods.go:89] "kube-proxy-cdbjg" [f638a0d8-f3ed-4ab9-89c8-b68a756e51e9] Running
	I1017 19:40:38.165421  719770 system_pods.go:89] "kube-scheduler-no-preload-171807" [0778bb3b-6ddd-4643-8d52-0ae7904294dd] Running
	I1017 19:40:38.165428  719770 system_pods.go:89] "storage-provisioner" [72f77177-3dcc-471b-a2a5-baaa7a566bc9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:40:38.165447  719770 retry.go:31] will retry after 335.444042ms: missing components: kube-dns
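The retries above are waiting for the coredns pod, which carries the k8s-app=kube-dns label, to leave Pending. A sketch of the equivalent manual check, assuming the kubeconfig context is named after the profile as minikube configures it:

	kubectl --context no-preload-171807 -n kube-system get pods -l k8s-app=kube-dns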
	I1017 19:40:38.476919  731035 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:40:38.478286  731035 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:40:38.482473  731035 config.go:182] Loaded profile config "old-k8s-version-907112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 19:40:38.484598  731035 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1017 19:40:38.485963  731035 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:40:38.514958  731035 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:40:38.515069  731035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:40:38.596707  731035 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-17 19:40:38.581842095 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:40:38.596878  731035 docker.go:318] overlay module found
	I1017 19:40:38.598618  731035 out.go:179] * Using the docker driver based on existing profile
	I1017 19:40:38.599921  731035 start.go:305] selected driver: docker
	I1017 19:40:38.599940  731035 start.go:925] validating driver "docker" against &{Name:old-k8s-version-907112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-907112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:40:38.600057  731035 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:40:38.600900  731035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:40:38.675003  731035 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-17 19:40:38.661216352 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:40:38.675416  731035 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:40:38.675452  731035 cni.go:84] Creating CNI manager for ""
	I1017 19:40:38.675514  731035 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:40:38.675555  731035 start.go:349] cluster config:
	{Name:old-k8s-version-907112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-907112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:40:38.677886  731035 out.go:179] * Starting "old-k8s-version-907112" primary control-plane node in "old-k8s-version-907112" cluster
	I1017 19:40:38.679168  731035 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:40:38.680293  731035 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:40:38.681516  731035 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 19:40:38.681573  731035 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1017 19:40:38.681615  731035 cache.go:58] Caching tarball of preloaded images
	I1017 19:40:38.681637  731035 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:40:38.681782  731035 preload.go:233] Found /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:40:38.681808  731035 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1017 19:40:38.681949  731035 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/config.json ...
	I1017 19:40:38.707592  731035 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:40:38.707620  731035 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:40:38.707642  731035 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:40:38.707677  731035 start.go:360] acquireMachinesLock for old-k8s-version-907112: {Name:mk42f529ce77b781e034f627636252c3ef356cb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:40:38.707790  731035 start.go:364] duration metric: took 57.496µs to acquireMachinesLock for "old-k8s-version-907112"
	I1017 19:40:38.707814  731035 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:40:38.707825  731035 fix.go:54] fixHost starting: 
	I1017 19:40:38.708190  731035 cli_runner.go:164] Run: docker container inspect old-k8s-version-907112 --format={{.State.Status}}
	I1017 19:40:38.729399  731035 fix.go:112] recreateIfNeeded on old-k8s-version-907112: state=Stopped err=<nil>
	W1017 19:40:38.729432  731035 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:40:38.507014  719770 system_pods.go:86] 8 kube-system pods found
	I1017 19:40:38.507082  719770 system_pods.go:89] "coredns-66bc5c9577-gnx5k" [5cc39277-706b-4f4e-87c5-1af53966018f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:40:38.507093  719770 system_pods.go:89] "etcd-no-preload-171807" [1a0e3833-508c-4886-8024-5adacd924486] Running
	I1017 19:40:38.507102  719770 system_pods.go:89] "kindnet-tk5hv" [06dcd1bc-85e2-4dd3-aa6c-8869c5bdcc7f] Running
	I1017 19:40:38.507109  719770 system_pods.go:89] "kube-apiserver-no-preload-171807" [e63385fa-73db-4ce6-9c1f-8cf6e2088a50] Running
	I1017 19:40:38.507119  719770 system_pods.go:89] "kube-controller-manager-no-preload-171807" [b0498011-1f3a-4293-8719-b1e940e3a906] Running
	I1017 19:40:38.507124  719770 system_pods.go:89] "kube-proxy-cdbjg" [f638a0d8-f3ed-4ab9-89c8-b68a756e51e9] Running
	I1017 19:40:38.507129  719770 system_pods.go:89] "kube-scheduler-no-preload-171807" [0778bb3b-6ddd-4643-8d52-0ae7904294dd] Running
	I1017 19:40:38.507137  719770 system_pods.go:89] "storage-provisioner" [72f77177-3dcc-471b-a2a5-baaa7a566bc9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:40:38.507160  719770 retry.go:31] will retry after 358.565637ms: missing components: kube-dns
	I1017 19:40:38.871890  719770 system_pods.go:86] 8 kube-system pods found
	I1017 19:40:38.871930  719770 system_pods.go:89] "coredns-66bc5c9577-gnx5k" [5cc39277-706b-4f4e-87c5-1af53966018f] Running
	I1017 19:40:38.871939  719770 system_pods.go:89] "etcd-no-preload-171807" [1a0e3833-508c-4886-8024-5adacd924486] Running
	I1017 19:40:38.871945  719770 system_pods.go:89] "kindnet-tk5hv" [06dcd1bc-85e2-4dd3-aa6c-8869c5bdcc7f] Running
	I1017 19:40:38.871951  719770 system_pods.go:89] "kube-apiserver-no-preload-171807" [e63385fa-73db-4ce6-9c1f-8cf6e2088a50] Running
	I1017 19:40:38.871956  719770 system_pods.go:89] "kube-controller-manager-no-preload-171807" [b0498011-1f3a-4293-8719-b1e940e3a906] Running
	I1017 19:40:38.871961  719770 system_pods.go:89] "kube-proxy-cdbjg" [f638a0d8-f3ed-4ab9-89c8-b68a756e51e9] Running
	I1017 19:40:38.871966  719770 system_pods.go:89] "kube-scheduler-no-preload-171807" [0778bb3b-6ddd-4643-8d52-0ae7904294dd] Running
	I1017 19:40:38.871971  719770 system_pods.go:89] "storage-provisioner" [72f77177-3dcc-471b-a2a5-baaa7a566bc9] Running
	I1017 19:40:38.871982  719770 system_pods.go:126] duration metric: took 1.034910209s to wait for k8s-apps to be running ...
	I1017 19:40:38.871992  719770 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:40:38.872053  719770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:40:38.892331  719770 system_svc.go:56] duration metric: took 20.327094ms WaitForService to wait for kubelet
	I1017 19:40:38.892365  719770 kubeadm.go:586] duration metric: took 15.50917874s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:40:38.892389  719770 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:40:38.896315  719770 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 19:40:38.896349  719770 node_conditions.go:123] node cpu capacity is 8
	I1017 19:40:38.896402  719770 node_conditions.go:105] duration metric: took 4.006686ms to run NodePressure ...
	I1017 19:40:38.896419  719770 start.go:241] waiting for startup goroutines ...
	I1017 19:40:38.896433  719770 start.go:246] waiting for cluster config update ...
	I1017 19:40:38.896476  719770 start.go:255] writing updated cluster config ...
	I1017 19:40:38.896836  719770 ssh_runner.go:195] Run: rm -f paused
	I1017 19:40:38.902435  719770 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:40:38.907239  719770 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gnx5k" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:40:38.913201  719770 pod_ready.go:94] pod "coredns-66bc5c9577-gnx5k" is "Ready"
	I1017 19:40:38.913234  719770 pod_ready.go:86] duration metric: took 5.962981ms for pod "coredns-66bc5c9577-gnx5k" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:40:38.915722  719770 pod_ready.go:83] waiting for pod "etcd-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:40:38.920807  719770 pod_ready.go:94] pod "etcd-no-preload-171807" is "Ready"
	I1017 19:40:38.920840  719770 pod_ready.go:86] duration metric: took 5.088436ms for pod "etcd-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:40:38.923061  719770 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:40:38.932110  719770 pod_ready.go:94] pod "kube-apiserver-no-preload-171807" is "Ready"
	I1017 19:40:38.932149  719770 pod_ready.go:86] duration metric: took 9.060749ms for pod "kube-apiserver-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:40:38.937765  719770 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:40:39.309209  719770 pod_ready.go:94] pod "kube-controller-manager-no-preload-171807" is "Ready"
	I1017 19:40:39.309241  719770 pod_ready.go:86] duration metric: took 371.447386ms for pod "kube-controller-manager-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:40:39.507941  719770 pod_ready.go:83] waiting for pod "kube-proxy-cdbjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:40:39.907214  719770 pod_ready.go:94] pod "kube-proxy-cdbjg" is "Ready"
	I1017 19:40:39.907242  719770 pod_ready.go:86] duration metric: took 399.267932ms for pod "kube-proxy-cdbjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:40:40.107448  719770 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:40:40.507318  719770 pod_ready.go:94] pod "kube-scheduler-no-preload-171807" is "Ready"
	I1017 19:40:40.507358  719770 pod_ready.go:86] duration metric: took 399.881337ms for pod "kube-scheduler-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:40:40.507374  719770 pod_ready.go:40] duration metric: took 1.604895604s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:40:40.563815  719770 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 19:40:40.566623  719770 out.go:179] * Done! kubectl is now configured to use "no-preload-171807" cluster and "default" namespace by default
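The extra pod_ready wait that just completed is the condition kubectl can express directly with kubectl wait; a sketch for one of the labels listed in the log, using the same 4m0s budget the log mentions:

	kubectl --context no-preload-171807 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s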
	I1017 19:40:38.948937  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:40:38.949491  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:40:38.949554  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:40:38.949617  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:40:38.981641  696997 cri.go:89] found id: "715d021be6158c8d9a1c1e34e51ed62746d6d4d5711b51182282458495195965"
	I1017 19:40:38.981668  696997 cri.go:89] found id: ""
	I1017 19:40:38.981701  696997 logs.go:282] 1 containers: [715d021be6158c8d9a1c1e34e51ed62746d6d4d5711b51182282458495195965]
	I1017 19:40:38.981761  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:40:38.986245  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:40:38.986336  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:40:39.021063  696997 cri.go:89] found id: ""
	I1017 19:40:39.021095  696997 logs.go:282] 0 containers: []
	W1017 19:40:39.021106  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:40:39.021119  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:40:39.021178  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:40:39.066137  696997 cri.go:89] found id: ""
	I1017 19:40:39.066167  696997 logs.go:282] 0 containers: []
	W1017 19:40:39.066178  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:40:39.066186  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:40:39.066244  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:40:39.105376  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:40:39.105405  696997 cri.go:89] found id: ""
	I1017 19:40:39.105417  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:40:39.105487  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:40:39.111077  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:40:39.111161  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:40:39.141779  696997 cri.go:89] found id: ""
	I1017 19:40:39.141806  696997 logs.go:282] 0 containers: []
	W1017 19:40:39.141816  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:40:39.141823  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:40:39.141930  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:40:39.181629  696997 cri.go:89] found id: "bf30afb18dad882e9d8e9532b83f12c06b6cafa8d7a9e795b88d9cb0c568bf7d"
	I1017 19:40:39.181665  696997 cri.go:89] found id: ""
	I1017 19:40:39.181676  696997 logs.go:282] 1 containers: [bf30afb18dad882e9d8e9532b83f12c06b6cafa8d7a9e795b88d9cb0c568bf7d]
	I1017 19:40:39.181747  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:40:39.188779  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:40:39.188863  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:40:39.225272  696997 cri.go:89] found id: ""
	I1017 19:40:39.225306  696997 logs.go:282] 0 containers: []
	W1017 19:40:39.225317  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:40:39.225332  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:40:39.225398  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:40:39.264644  696997 cri.go:89] found id: ""
	I1017 19:40:39.264676  696997 logs.go:282] 0 containers: []
	W1017 19:40:39.264716  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:40:39.264728  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:40:39.264743  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:40:39.351006  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:40:39.351050  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:40:39.401783  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:40:39.401820  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:40:39.520603  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:40:39.520647  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:40:39.540925  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:40:39.540965  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:40:39.616773  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:40:39.616798  696997 logs.go:123] Gathering logs for kube-apiserver [715d021be6158c8d9a1c1e34e51ed62746d6d4d5711b51182282458495195965] ...
	I1017 19:40:39.616831  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 715d021be6158c8d9a1c1e34e51ed62746d6d4d5711b51182282458495195965"
	I1017 19:40:39.658233  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:40:39.658272  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:40:39.709671  696997 logs.go:123] Gathering logs for kube-controller-manager [bf30afb18dad882e9d8e9532b83f12c06b6cafa8d7a9e795b88d9cb0c568bf7d] ...
	I1017 19:40:39.709726  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf30afb18dad882e9d8e9532b83f12c06b6cafa8d7a9e795b88d9cb0c568bf7d"
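The log gathering above can be reproduced by hand when triaging a failure like this one; these are the same commands the harness runs, executed over minikube ssh (profile name and container ID left as placeholders):

	minikube -p <profile> ssh "sudo journalctl -u crio -n 400"
	minikube -p <profile> ssh "sudo crictl ps -a"
	minikube -p <profile> ssh "sudo crictl logs --tail 400 <container-id>"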
	I1017 19:40:42.241516  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:40:42.242066  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:40:42.242151  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:40:42.242236  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:40:42.275100  696997 cri.go:89] found id: "715d021be6158c8d9a1c1e34e51ed62746d6d4d5711b51182282458495195965"
	I1017 19:40:42.275134  696997 cri.go:89] found id: ""
	I1017 19:40:42.275144  696997 logs.go:282] 1 containers: [715d021be6158c8d9a1c1e34e51ed62746d6d4d5711b51182282458495195965]
	I1017 19:40:42.275208  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:40:42.279893  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:40:42.279961  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:40:42.313283  696997 cri.go:89] found id: ""
	I1017 19:40:42.313309  696997 logs.go:282] 0 containers: []
	W1017 19:40:42.313316  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:40:42.313322  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:40:42.313384  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:40:42.343674  696997 cri.go:89] found id: ""
	I1017 19:40:42.343720  696997 logs.go:282] 0 containers: []
	W1017 19:40:42.343731  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:40:42.343740  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:40:42.343817  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:40:42.374399  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:40:42.374425  696997 cri.go:89] found id: ""
	I1017 19:40:42.374435  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:40:42.374499  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:40:42.379179  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:40:42.379265  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:40:42.511566  726310 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 19:40:42.511654  726310 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 19:40:42.511792  726310 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 19:40:42.511861  726310 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1017 19:40:42.511904  726310 kubeadm.go:318] OS: Linux
	I1017 19:40:42.511961  726310 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 19:40:42.512018  726310 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 19:40:42.512077  726310 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 19:40:42.512135  726310 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 19:40:42.512192  726310 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 19:40:42.512249  726310 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 19:40:42.512310  726310 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 19:40:42.512381  726310 kubeadm.go:318] CGROUPS_IO: enabled
	I1017 19:40:42.512479  726310 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 19:40:42.512600  726310 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 19:40:42.512729  726310 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 19:40:42.512839  726310 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 19:40:42.515074  726310 out.go:252]   - Generating certificates and keys ...
	I1017 19:40:42.515263  726310 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 19:40:42.515376  726310 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 19:40:42.515461  726310 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 19:40:42.515533  726310 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 19:40:42.515741  726310 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 19:40:42.515821  726310 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 19:40:42.515893  726310 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 19:40:42.516083  726310 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-599709 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1017 19:40:42.516172  726310 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 19:40:42.516343  726310 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-599709 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1017 19:40:42.516428  726310 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 19:40:42.516515  726310 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 19:40:42.516578  726310 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 19:40:42.516660  726310 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 19:40:42.516781  726310 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 19:40:42.516882  726310 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 19:40:42.516980  726310 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 19:40:42.517082  726310 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 19:40:42.517163  726310 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 19:40:42.517277  726310 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 19:40:42.517389  726310 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 19:40:42.519632  726310 out.go:252]   - Booting up control plane ...
	I1017 19:40:42.519767  726310 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 19:40:42.519879  726310 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 19:40:42.519978  726310 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 19:40:42.520114  726310 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 19:40:42.520242  726310 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 19:40:42.520402  726310 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 19:40:42.520532  726310 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 19:40:42.520615  726310 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 19:40:42.520804  726310 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 19:40:42.520891  726310 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 19:40:42.520977  726310 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501930207s
	I1017 19:40:42.521114  726310 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 19:40:42.521247  726310 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1017 19:40:42.521387  726310 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 19:40:42.521519  726310 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 19:40:42.521586  726310 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.540582723s
	I1017 19:40:42.521644  726310 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.890559401s
	I1017 19:40:42.521727  726310 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501338405s
	I1017 19:40:42.521858  726310 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 19:40:42.522028  726310 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 19:40:42.522111  726310 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 19:40:42.522402  726310 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-599709 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 19:40:42.522491  726310 kubeadm.go:318] [bootstrap-token] Using token: kcip3a.983xhyt431jix4zh
	I1017 19:40:42.523823  726310 out.go:252]   - Configuring RBAC rules ...
	I1017 19:40:42.523960  726310 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 19:40:42.524086  726310 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 19:40:42.524288  726310 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 19:40:42.524473  726310 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 19:40:42.524638  726310 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 19:40:42.524773  726310 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 19:40:42.524928  726310 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 19:40:42.524979  726310 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 19:40:42.525060  726310 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 19:40:42.525076  726310 kubeadm.go:318] 
	I1017 19:40:42.525159  726310 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 19:40:42.525169  726310 kubeadm.go:318] 
	I1017 19:40:42.525267  726310 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 19:40:42.525282  726310 kubeadm.go:318] 
	I1017 19:40:42.525313  726310 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 19:40:42.525387  726310 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 19:40:42.525457  726310 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 19:40:42.525468  726310 kubeadm.go:318] 
	I1017 19:40:42.525533  726310 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 19:40:42.525542  726310 kubeadm.go:318] 
	I1017 19:40:42.525600  726310 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 19:40:42.525611  726310 kubeadm.go:318] 
	I1017 19:40:42.525709  726310 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 19:40:42.525808  726310 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 19:40:42.525893  726310 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 19:40:42.525899  726310 kubeadm.go:318] 
	I1017 19:40:42.526002  726310 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 19:40:42.526098  726310 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 19:40:42.526112  726310 kubeadm.go:318] 
	I1017 19:40:42.526216  726310 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token kcip3a.983xhyt431jix4zh \
	I1017 19:40:42.526348  726310 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e \
	I1017 19:40:42.526382  726310 kubeadm.go:318] 	--control-plane 
	I1017 19:40:42.526390  726310 kubeadm.go:318] 
	I1017 19:40:42.526510  726310 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 19:40:42.526520  726310 kubeadm.go:318] 
	I1017 19:40:42.526633  726310 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token kcip3a.983xhyt431jix4zh \
	I1017 19:40:42.526819  726310 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e 
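	For reference, the --discovery-token-ca-cert-hash value printed above can be recomputed on the control plane from the cluster CA, using the standard kubeadm procedure (a sketch, assuming the default CA location /etc/kubernetes/pki/ca.crt):
	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	Note the join token itself is only valid while its ttl lasts (24h0m0s in the generated kubeadm config below).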
	I1017 19:40:42.526838  726310 cni.go:84] Creating CNI manager for ""
	I1017 19:40:42.526848  726310 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:40:42.529241  726310 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 19:40:38.731613  731035 out.go:252] * Restarting existing docker container for "old-k8s-version-907112" ...
	I1017 19:40:38.731722  731035 cli_runner.go:164] Run: docker start old-k8s-version-907112
	I1017 19:40:39.047539  731035 cli_runner.go:164] Run: docker container inspect old-k8s-version-907112 --format={{.State.Status}}
	I1017 19:40:39.072740  731035 kic.go:430] container "old-k8s-version-907112" state is running.
	I1017 19:40:39.073276  731035 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-907112
	I1017 19:40:39.096361  731035 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/config.json ...
	I1017 19:40:39.096664  731035 machine.go:93] provisionDockerMachine start ...
	I1017 19:40:39.096792  731035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907112
	I1017 19:40:39.119813  731035 main.go:141] libmachine: Using SSH client type: native
	I1017 19:40:39.120092  731035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1017 19:40:39.120111  731035 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:40:39.120936  731035 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37424->127.0.0.1:33438: read: connection reset by peer
	I1017 19:40:42.261970  731035 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-907112
	
	I1017 19:40:42.262005  731035 ubuntu.go:182] provisioning hostname "old-k8s-version-907112"
	I1017 19:40:42.262073  731035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907112
	I1017 19:40:42.282726  731035 main.go:141] libmachine: Using SSH client type: native
	I1017 19:40:42.282946  731035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1017 19:40:42.282961  731035 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-907112 && echo "old-k8s-version-907112" | sudo tee /etc/hostname
	I1017 19:40:42.439707  731035 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-907112
	
	I1017 19:40:42.439794  731035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907112
	I1017 19:40:42.459642  731035 main.go:141] libmachine: Using SSH client type: native
	I1017 19:40:42.459920  731035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1017 19:40:42.459944  731035 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-907112' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-907112/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-907112' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:40:42.608356  731035 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:40:42.608400  731035 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-492109/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-492109/.minikube}
	I1017 19:40:42.608426  731035 ubuntu.go:190] setting up certificates
	I1017 19:40:42.608440  731035 provision.go:84] configureAuth start
	I1017 19:40:42.608503  731035 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-907112
	I1017 19:40:42.630778  731035 provision.go:143] copyHostCerts
	I1017 19:40:42.630854  731035 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem, removing ...
	I1017 19:40:42.630876  731035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem
	I1017 19:40:42.630969  731035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem (1679 bytes)
	I1017 19:40:42.631128  731035 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem, removing ...
	I1017 19:40:42.631156  731035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem
	I1017 19:40:42.631211  731035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem (1078 bytes)
	I1017 19:40:42.631292  731035 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem, removing ...
	I1017 19:40:42.631303  731035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem
	I1017 19:40:42.631340  731035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem (1123 bytes)
	I1017 19:40:42.631418  731035 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-907112 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-907112]
	I1017 19:40:42.938228  731035 provision.go:177] copyRemoteCerts
	I1017 19:40:42.938299  731035 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:40:42.938338  731035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907112
	I1017 19:40:42.960459  731035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/old-k8s-version-907112/id_rsa Username:docker}
	I1017 19:40:43.060843  731035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 19:40:43.079974  731035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1017 19:40:43.099223  731035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:40:43.118174  731035 provision.go:87] duration metric: took 509.713258ms to configureAuth
	I1017 19:40:43.118209  731035 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:40:43.118442  731035 config.go:182] Loaded profile config "old-k8s-version-907112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 19:40:43.118562  731035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907112
	I1017 19:40:43.137833  731035 main.go:141] libmachine: Using SSH client type: native
	I1017 19:40:43.138054  731035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1017 19:40:43.138076  731035 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:40:43.450553  731035 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:40:43.450583  731035 machine.go:96] duration metric: took 4.35389944s to provisionDockerMachine
	I1017 19:40:43.450597  731035 start.go:293] postStartSetup for "old-k8s-version-907112" (driver="docker")
	I1017 19:40:43.450610  731035 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:40:43.450674  731035 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:40:43.450807  731035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907112
	I1017 19:40:43.471207  731035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/old-k8s-version-907112/id_rsa Username:docker}
	I1017 19:40:43.570122  731035 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:40:43.574241  731035 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:40:43.574269  731035 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:40:43.574281  731035 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/addons for local assets ...
	I1017 19:40:43.574330  731035 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/files for local assets ...
	I1017 19:40:43.574416  731035 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem -> 4957252.pem in /etc/ssl/certs
	I1017 19:40:43.574542  731035 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:40:43.583883  731035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:40:43.603561  731035 start.go:296] duration metric: took 152.947363ms for postStartSetup
	I1017 19:40:43.603655  731035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:40:43.603727  731035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907112
	I1017 19:40:43.622862  731035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/old-k8s-version-907112/id_rsa Username:docker}
	I1017 19:40:43.718294  731035 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:40:43.723483  731035 fix.go:56] duration metric: took 5.015649842s for fixHost
	I1017 19:40:43.723516  731035 start.go:83] releasing machines lock for "old-k8s-version-907112", held for 5.015712441s
	I1017 19:40:43.723592  731035 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-907112
	I1017 19:40:43.744337  731035 ssh_runner.go:195] Run: cat /version.json
	I1017 19:40:43.744405  731035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907112
	I1017 19:40:43.744409  731035 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:40:43.744474  731035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907112
	I1017 19:40:43.765064  731035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/old-k8s-version-907112/id_rsa Username:docker}
	I1017 19:40:43.765155  731035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/old-k8s-version-907112/id_rsa Username:docker}
	I1017 19:40:43.936189  731035 ssh_runner.go:195] Run: systemctl --version
	I1017 19:40:43.943754  731035 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:40:43.984493  731035 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:40:43.989940  731035 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:40:43.990026  731035 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:40:43.998635  731035 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:40:43.998662  731035 start.go:495] detecting cgroup driver to use...
	I1017 19:40:43.998727  731035 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 19:40:43.998789  731035 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:40:44.014643  731035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:40:44.028557  731035 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:40:44.028609  731035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:40:44.044649  731035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:40:44.058826  731035 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:40:44.147964  731035 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:40:44.239869  731035 docker.go:234] disabling docker service ...
	I1017 19:40:44.239956  731035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:40:44.256130  731035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:40:44.271270  731035 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:40:44.354353  731035 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:40:44.447785  731035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:40:44.464396  731035 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:40:44.481294  731035 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1017 19:40:44.481375  731035 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:40:44.491020  731035 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 19:40:44.491093  731035 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:40:44.500889  731035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:40:44.511086  731035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:40:44.521145  731035 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:40:44.530992  731035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:40:44.540863  731035 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:40:44.550328  731035 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:40:44.560835  731035 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:40:44.569168  731035 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:40:44.578611  731035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:40:44.662623  731035 ssh_runner.go:195] Run: sudo systemctl restart crio
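	Read together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following key settings before crio is restarted (a reconstruction from the commands shown, not a capture from the node):
	    pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]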
	I1017 19:40:44.786591  731035 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:40:44.786654  731035 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:40:44.790995  731035 start.go:563] Will wait 60s for crictl version
	I1017 19:40:44.791072  731035 ssh_runner.go:195] Run: which crictl
	I1017 19:40:44.795267  731035 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:40:44.822357  731035 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:40:44.822458  731035 ssh_runner.go:195] Run: crio --version
	I1017 19:40:44.853391  731035 ssh_runner.go:195] Run: crio --version
	I1017 19:40:44.887391  731035 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1017 19:40:42.530466  726310 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 19:40:42.535752  726310 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 19:40:42.535773  726310 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 19:40:42.551416  726310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 19:40:42.798023  726310 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 19:40:42.798089  726310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:40:42.798179  726310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-599709 minikube.k8s.io/updated_at=2025_10_17T19_40_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=embed-certs-599709 minikube.k8s.io/primary=true
	I1017 19:40:42.890581  726310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:40:42.890719  726310 ops.go:34] apiserver oom_adj: -16
	I1017 19:40:43.390651  726310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:40:43.891618  726310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:40:44.390855  726310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:40:44.890633  726310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:40:45.390749  726310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:40:44.888881  731035 cli_runner.go:164] Run: docker network inspect old-k8s-version-907112 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:40:44.906939  731035 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 19:40:44.911869  731035 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
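	The temp-file-then-cp pattern in the command above is deliberate: inside a Docker container /etc/hosts is a bind mount, so rename-based editors such as sed -i can fail on it, while cp rewrites the file in place and keeps the mount intact. The same idempotent update, sketched with the entry from this run:
	    ENTRY='192.168.85.1	host.minikube.internal'
	    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$ENTRY"; } > /tmp/hosts.$$
	    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$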
	I1017 19:40:44.923420  731035 kubeadm.go:883] updating cluster {Name:old-k8s-version-907112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-907112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:40:44.923569  731035 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 19:40:44.923630  731035 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:40:44.961812  731035 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:40:44.961834  731035 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:40:44.961893  731035 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:40:44.990628  731035 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:40:44.990652  731035 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:40:44.990659  731035 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1017 19:40:44.990791  731035 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-907112 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-907112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:40:44.990882  731035 ssh_runner.go:195] Run: crio config
	I1017 19:40:45.040628  731035 cni.go:84] Creating CNI manager for ""
	I1017 19:40:45.040654  731035 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:40:45.040672  731035 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:40:45.040712  731035 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-907112 NodeName:old-k8s-version-907112 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:40:45.040860  731035 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-907112"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 19:40:45.040938  731035 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1017 19:40:45.049600  731035 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:40:45.049664  731035 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:40:45.058186  731035 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1017 19:40:45.072405  731035 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:40:45.086855  731035 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1017 19:40:45.101360  731035 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 19:40:45.105796  731035 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:40:45.117388  731035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:40:45.202770  731035 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:40:45.227483  731035 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112 for IP: 192.168.85.2
	I1017 19:40:45.227510  731035 certs.go:195] generating shared ca certs ...
	I1017 19:40:45.227541  731035 certs.go:227] acquiring lock for ca certs: {Name:mkc97483d62151ba5c32d923dd19e3e2b3661468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:40:45.227720  731035 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key
	I1017 19:40:45.227764  731035 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key
	I1017 19:40:45.227774  731035 certs.go:257] generating profile certs ...
	I1017 19:40:45.227861  731035 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/client.key
	I1017 19:40:45.227910  731035 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/apiserver.key.fa0ce7ba
	I1017 19:40:45.227957  731035 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/proxy-client.key
	I1017 19:40:45.228062  731035 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725.pem (1338 bytes)
	W1017 19:40:45.228088  731035 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725_empty.pem, impossibly tiny 0 bytes
	I1017 19:40:45.228095  731035 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 19:40:45.228117  731035 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem (1078 bytes)
	I1017 19:40:45.228139  731035 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:40:45.228164  731035 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem (1679 bytes)
	I1017 19:40:45.228213  731035 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:40:45.230763  731035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:40:45.251103  731035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:40:45.272120  731035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:40:45.293643  731035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 19:40:45.320100  731035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1017 19:40:45.340883  731035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:40:45.359802  731035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:40:45.379269  731035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 19:40:45.398502  731035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725.pem --> /usr/share/ca-certificates/495725.pem (1338 bytes)
	I1017 19:40:45.420065  731035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /usr/share/ca-certificates/4957252.pem (1708 bytes)
	I1017 19:40:45.441879  731035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:40:45.462468  731035 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:40:45.476547  731035 ssh_runner.go:195] Run: openssl version
	I1017 19:40:45.483569  731035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:40:45.493438  731035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:40:45.497629  731035 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:40:45.497718  731035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:40:45.538897  731035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:40:45.548493  731035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/495725.pem && ln -fs /usr/share/ca-certificates/495725.pem /etc/ssl/certs/495725.pem"
	I1017 19:40:45.557999  731035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/495725.pem
	I1017 19:40:45.562304  731035 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/495725.pem
	I1017 19:40:45.562363  731035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/495725.pem
	I1017 19:40:45.603746  731035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/495725.pem /etc/ssl/certs/51391683.0"
	I1017 19:40:45.613497  731035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4957252.pem && ln -fs /usr/share/ca-certificates/4957252.pem /etc/ssl/certs/4957252.pem"
	I1017 19:40:45.624188  731035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4957252.pem
	I1017 19:40:45.629451  731035 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/4957252.pem
	I1017 19:40:45.629524  731035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4957252.pem
	I1017 19:40:45.668922  731035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4957252.pem /etc/ssl/certs/3ec20f2e.0"
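	The openssl x509 -hash / ln -fs pairs above follow OpenSSL's CA directory convention: each trusted certificate must be reachable as <subject-hash>.0 under /etc/ssl/certs. A minimal sketch of one such link (b5213941 is the hash used for minikubeCA.pem earlier in this log):
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 for this CA
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"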
	I1017 19:40:45.678815  731035 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:40:45.683531  731035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:40:45.724063  731035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:40:45.775266  731035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:40:45.826392  731035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:40:45.885582  731035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:40:45.951423  731035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
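	Each of the checks above relies on -checkend 86400, which makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), so freshness can be tested without parsing dates. For example:
	    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"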
	I1017 19:40:46.014489  731035 kubeadm.go:400] StartCluster: {Name:old-k8s-version-907112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-907112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:40:46.014708  731035 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:40:46.014816  731035 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:40:46.054906  731035 cri.go:89] found id: "054c0ba11919a27c613a43b0283529cadb5c43fac2b53a9bac2aaa468326a52d"
	I1017 19:40:46.054934  731035 cri.go:89] found id: "6f75954cb97693039a7a28b7e532c1cda8aaba2ac4c24c3d853c709e351d3c90"
	I1017 19:40:46.054939  731035 cri.go:89] found id: "059b93c2a1d4e2bc4bdba5fd8d096798638e1a2899fc8316153e0e2480d7fc01"
	I1017 19:40:46.054944  731035 cri.go:89] found id: "0aa671be2daa82154fa84103fd15b8447d2b25c3049ce697edb71872df1653db"
	I1017 19:40:46.054949  731035 cri.go:89] found id: ""
	I1017 19:40:46.054997  731035 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 19:40:46.068146  731035 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:40:46Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:40:46.068227  731035 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:40:46.078474  731035 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 19:40:46.078500  731035 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 19:40:46.078559  731035 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 19:40:46.089422  731035 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:40:46.090396  731035 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-907112" does not appear in /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:40:46.090996  731035 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-492109/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-907112" cluster setting kubeconfig missing "old-k8s-version-907112" context setting]
	I1017 19:40:46.091864  731035 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:40:46.093858  731035 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 19:40:46.103735  731035 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1017 19:40:46.103783  731035 kubeadm.go:601] duration metric: took 25.275002ms to restartPrimaryControlPlane
	I1017 19:40:46.103797  731035 kubeadm.go:402] duration metric: took 89.32236ms to StartCluster
	I1017 19:40:46.103822  731035 settings.go:142] acquiring lock: {Name:mkb8ebc6edbbb6915dd74086f502bcc2721555a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:40:46.103906  731035 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:40:46.105393  731035 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:40:46.105700  731035 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:40:46.105763  731035 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:40:46.105873  731035 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-907112"
	I1017 19:40:46.105894  731035 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-907112"
	W1017 19:40:46.105917  731035 addons.go:247] addon storage-provisioner should already be in state true
	I1017 19:40:46.105929  731035 config.go:182] Loaded profile config "old-k8s-version-907112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 19:40:46.105957  731035 host.go:66] Checking if "old-k8s-version-907112" exists ...
	I1017 19:40:46.105990  731035 addons.go:69] Setting dashboard=true in profile "old-k8s-version-907112"
	I1017 19:40:46.106005  731035 addons.go:238] Setting addon dashboard=true in "old-k8s-version-907112"
	W1017 19:40:46.106012  731035 addons.go:247] addon dashboard should already be in state true
	I1017 19:40:46.106038  731035 host.go:66] Checking if "old-k8s-version-907112" exists ...
	I1017 19:40:46.106154  731035 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-907112"
	I1017 19:40:46.106179  731035 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-907112"
	I1017 19:40:46.106483  731035 cli_runner.go:164] Run: docker container inspect old-k8s-version-907112 --format={{.State.Status}}
	I1017 19:40:46.106535  731035 cli_runner.go:164] Run: docker container inspect old-k8s-version-907112 --format={{.State.Status}}
	I1017 19:40:46.106671  731035 cli_runner.go:164] Run: docker container inspect old-k8s-version-907112 --format={{.State.Status}}
	I1017 19:40:46.108230  731035 out.go:179] * Verifying Kubernetes components...
	I1017 19:40:46.110344  731035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:40:46.135021  731035 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 19:40:46.137402  731035 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 19:40:46.137435  731035 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1017 19:40:42.407905  696997 cri.go:89] found id: ""
	I1017 19:40:42.407932  696997 logs.go:282] 0 containers: []
	W1017 19:40:42.407943  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:40:42.407951  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:40:42.408017  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:40:42.439533  696997 cri.go:89] found id: "bf30afb18dad882e9d8e9532b83f12c06b6cafa8d7a9e795b88d9cb0c568bf7d"
	I1017 19:40:42.439563  696997 cri.go:89] found id: ""
	I1017 19:40:42.439575  696997 logs.go:282] 1 containers: [bf30afb18dad882e9d8e9532b83f12c06b6cafa8d7a9e795b88d9cb0c568bf7d]
	I1017 19:40:42.439640  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:40:42.444348  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:40:42.444412  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:40:42.479003  696997 cri.go:89] found id: ""
	I1017 19:40:42.479029  696997 logs.go:282] 0 containers: []
	W1017 19:40:42.479036  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:40:42.479042  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:40:42.479104  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:40:42.510918  696997 cri.go:89] found id: ""
	I1017 19:40:42.510946  696997 logs.go:282] 0 containers: []
	W1017 19:40:42.510957  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:40:42.510970  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:40:42.510987  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:40:42.578614  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:40:42.578638  696997 logs.go:123] Gathering logs for kube-apiserver [715d021be6158c8d9a1c1e34e51ed62746d6d4d5711b51182282458495195965] ...
	I1017 19:40:42.578661  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 715d021be6158c8d9a1c1e34e51ed62746d6d4d5711b51182282458495195965"
	I1017 19:40:42.616514  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:40:42.616550  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:40:42.686666  696997 logs.go:123] Gathering logs for kube-controller-manager [bf30afb18dad882e9d8e9532b83f12c06b6cafa8d7a9e795b88d9cb0c568bf7d] ...
	I1017 19:40:42.686725  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf30afb18dad882e9d8e9532b83f12c06b6cafa8d7a9e795b88d9cb0c568bf7d"
	I1017 19:40:42.718305  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:40:42.718362  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:40:42.786773  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:40:42.786813  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:40:42.838340  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:40:42.838372  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:40:42.949485  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:40:42.949526  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:40:45.473758  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:40:45.474256  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:40:45.474321  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:40:45.474397  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:40:45.505320  696997 cri.go:89] found id: "715d021be6158c8d9a1c1e34e51ed62746d6d4d5711b51182282458495195965"
	I1017 19:40:45.505341  696997 cri.go:89] found id: ""
	I1017 19:40:45.505349  696997 logs.go:282] 1 containers: [715d021be6158c8d9a1c1e34e51ed62746d6d4d5711b51182282458495195965]
	I1017 19:40:45.505422  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:40:45.509708  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:40:45.509787  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:40:45.539635  696997 cri.go:89] found id: ""
	I1017 19:40:45.539660  696997 logs.go:282] 0 containers: []
	W1017 19:40:45.539672  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:40:45.539692  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:40:45.539752  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:40:45.570021  696997 cri.go:89] found id: ""
	I1017 19:40:45.570046  696997 logs.go:282] 0 containers: []
	W1017 19:40:45.570054  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:40:45.570061  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:40:45.570116  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:40:45.599908  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:40:45.599933  696997 cri.go:89] found id: ""
	I1017 19:40:45.599945  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:40:45.600007  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:40:45.604706  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:40:45.604774  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:40:45.635621  696997 cri.go:89] found id: ""
	I1017 19:40:45.635649  696997 logs.go:282] 0 containers: []
	W1017 19:40:45.635661  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:40:45.635669  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:40:45.635751  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:40:45.666268  696997 cri.go:89] found id: "bf30afb18dad882e9d8e9532b83f12c06b6cafa8d7a9e795b88d9cb0c568bf7d"
	I1017 19:40:45.666289  696997 cri.go:89] found id: ""
	I1017 19:40:45.666298  696997 logs.go:282] 1 containers: [bf30afb18dad882e9d8e9532b83f12c06b6cafa8d7a9e795b88d9cb0c568bf7d]
	I1017 19:40:45.666364  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:40:45.671067  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:40:45.671158  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:40:45.701708  696997 cri.go:89] found id: ""
	I1017 19:40:45.701740  696997 logs.go:282] 0 containers: []
	W1017 19:40:45.701750  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:40:45.701756  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:40:45.701813  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:40:45.734573  696997 cri.go:89] found id: ""
	I1017 19:40:45.734605  696997 logs.go:282] 0 containers: []
	W1017 19:40:45.734616  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:40:45.734629  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:40:45.734645  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:40:45.812397  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:40:45.812427  696997 logs.go:123] Gathering logs for kube-apiserver [715d021be6158c8d9a1c1e34e51ed62746d6d4d5711b51182282458495195965] ...
	I1017 19:40:45.812451  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 715d021be6158c8d9a1c1e34e51ed62746d6d4d5711b51182282458495195965"
	I1017 19:40:45.855667  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:40:45.855772  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:40:45.934895  696997 logs.go:123] Gathering logs for kube-controller-manager [bf30afb18dad882e9d8e9532b83f12c06b6cafa8d7a9e795b88d9cb0c568bf7d] ...
	I1017 19:40:45.934946  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf30afb18dad882e9d8e9532b83f12c06b6cafa8d7a9e795b88d9cb0c568bf7d"
	I1017 19:40:45.974831  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:40:45.974876  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:40:46.045737  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:40:46.045793  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:40:46.088126  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:40:46.088159  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:40:46.236502  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:40:46.236565  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:40:45.890971  726310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:40:46.391574  726310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:40:46.891388  726310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:40:47.391314  726310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:40:47.461676  726310 kubeadm.go:1113] duration metric: took 4.663634962s to wait for elevateKubeSystemPrivileges
	I1017 19:40:47.461735  726310 kubeadm.go:402] duration metric: took 15.588798913s to StartCluster
	I1017 19:40:47.461763  726310 settings.go:142] acquiring lock: {Name:mkb8ebc6edbbb6915dd74086f502bcc2721555a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:40:47.461847  726310 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:40:47.463529  726310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:40:47.463805  726310 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 19:40:47.463831  726310 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:40:47.463891  726310 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:40:47.463987  726310 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-599709"
	I1017 19:40:47.464010  726310 addons.go:69] Setting default-storageclass=true in profile "embed-certs-599709"
	I1017 19:40:47.464018  726310 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-599709"
	I1017 19:40:47.464035  726310 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-599709"
	I1017 19:40:47.464051  726310 config.go:182] Loaded profile config "embed-certs-599709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:40:47.464060  726310 host.go:66] Checking if "embed-certs-599709" exists ...
	I1017 19:40:47.464428  726310 cli_runner.go:164] Run: docker container inspect embed-certs-599709 --format={{.State.Status}}
	I1017 19:40:47.464622  726310 cli_runner.go:164] Run: docker container inspect embed-certs-599709 --format={{.State.Status}}
	I1017 19:40:47.465541  726310 out.go:179] * Verifying Kubernetes components...
	I1017 19:40:47.467068  726310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:40:47.488572  726310 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 19:40:47.489526  726310 addons.go:238] Setting addon default-storageclass=true in "embed-certs-599709"
	I1017 19:40:47.489577  726310 host.go:66] Checking if "embed-certs-599709" exists ...
	I1017 19:40:47.489909  726310 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:40:47.489932  726310 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 19:40:47.489999  726310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:40:47.490123  726310 cli_runner.go:164] Run: docker container inspect embed-certs-599709 --format={{.State.Status}}
	I1017 19:40:47.516540  726310 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 19:40:47.516534  726310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/embed-certs-599709/id_rsa Username:docker}
	I1017 19:40:47.516571  726310 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 19:40:47.516708  726310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:40:47.541170  726310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/embed-certs-599709/id_rsa Username:docker}
	I1017 19:40:47.569457  726310 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 19:40:47.625901  726310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:40:47.651587  726310 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:40:47.677071  726310 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 19:40:47.800429  726310 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1017 19:40:47.803178  726310 node_ready.go:35] waiting up to 6m0s for node "embed-certs-599709" to be "Ready" ...
	I1017 19:40:48.029991  726310 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
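
The node_ready.go line above waits up to 6m0s for "embed-certs-599709" to report Ready. A hedged client-go sketch of the same wait follows; the kubeconfig path and node name are taken from the log, and minikube's actual helper differs in detail.

// Sketch, assuming a reachable kubeconfig: poll the node until its
// Ready condition is True, within the 6-minute budget from the log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s, give up after 6m, matching the wait budget in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "embed-certs-599709", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient apiserver errors: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready wait finished:", err)
}
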
	I1017 19:40:46.138877  731035 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 19:40:46.138902  731035 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 19:40:46.138966  731035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907112
	I1017 19:40:46.139173  731035 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:40:46.139186  731035 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 19:40:46.139224  731035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907112
	I1017 19:40:46.139527  731035 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-907112"
	W1017 19:40:46.139553  731035 addons.go:247] addon default-storageclass should already be in state true
	I1017 19:40:46.139586  731035 host.go:66] Checking if "old-k8s-version-907112" exists ...
	I1017 19:40:46.140088  731035 cli_runner.go:164] Run: docker container inspect old-k8s-version-907112 --format={{.State.Status}}
	I1017 19:40:46.170114  731035 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 19:40:46.170165  731035 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 19:40:46.170250  731035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907112
	I1017 19:40:46.182577  731035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/old-k8s-version-907112/id_rsa Username:docker}
	I1017 19:40:46.186867  731035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/old-k8s-version-907112/id_rsa Username:docker}
	I1017 19:40:46.200355  731035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/old-k8s-version-907112/id_rsa Username:docker}
	I1017 19:40:46.279657  731035 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:40:46.295536  731035 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-907112" to be "Ready" ...
	I1017 19:40:46.305730  731035 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:40:46.322939  731035 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 19:40:46.322974  731035 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 19:40:46.328660  731035 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 19:40:46.352485  731035 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 19:40:46.352517  731035 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 19:40:46.381176  731035 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 19:40:46.381207  731035 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 19:40:46.411974  731035 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 19:40:46.412015  731035 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 19:40:46.435035  731035 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 19:40:46.435063  731035 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 19:40:46.455989  731035 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 19:40:46.456035  731035 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 19:40:46.476361  731035 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 19:40:46.476392  731035 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 19:40:46.494904  731035 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 19:40:46.494948  731035 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 19:40:46.509811  731035 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 19:40:46.509848  731035 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 19:40:46.525136  731035 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
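
The dashboard install above scp's ten manifests into /etc/kubernetes/addons and then applies them in a single kubectl invocation with repeated -f flags. A sketch that builds the same command locally (file names and paths mirror the log; assumes kubectl is on PATH rather than invoked over SSH):

// Sketch: assemble one `kubectl apply` over all dashboard manifests,
// as the final Run line above does on the node.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"dashboard-ns", "dashboard-clusterrole", "dashboard-clusterrolebinding",
		"dashboard-configmap", "dashboard-dp", "dashboard-role",
		"dashboard-rolebinding", "dashboard-sa", "dashboard-secret", "dashboard-svc",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", "/etc/kubernetes/addons/"+m+".yaml")
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}
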
	
	
	==> CRI-O <==
	Oct 17 19:40:38 no-preload-171807 crio[769]: time="2025-10-17T19:40:38.189557864Z" level=info msg="Starting container: 3847c6537fd6ac0c8bcb17ca6d7a3a3453ce63c33d4299c2f2728cabba89da9a" id=5355cfe5-ed3b-46b1-a00c-f38470cf8b51 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:40:38 no-preload-171807 crio[769]: time="2025-10-17T19:40:38.191934008Z" level=info msg="Started container" PID=2903 containerID=3847c6537fd6ac0c8bcb17ca6d7a3a3453ce63c33d4299c2f2728cabba89da9a description=kube-system/coredns-66bc5c9577-gnx5k/coredns id=5355cfe5-ed3b-46b1-a00c-f38470cf8b51 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b97aed97a607a74e19ef67eca1d2d3879f1f6038217ac2882ca4dce83141c663
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.038957341Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4d874ce5-8560-4961-9263-a372e1417da5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.039088457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.045163602Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:22a2c93f291bf389173ac876157c568ab968e596fbe1d4f73dc162067a6fd18a UID:22292e6f-a57f-4f4c-baa0-b41b8ee6e47b NetNS:/var/run/netns/21ce0988-9883-4b9a-aef2-21b0bae9defc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005726b8}] Aliases:map[]}"
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.045205894Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.056051131Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:22a2c93f291bf389173ac876157c568ab968e596fbe1d4f73dc162067a6fd18a UID:22292e6f-a57f-4f4c-baa0-b41b8ee6e47b NetNS:/var/run/netns/21ce0988-9883-4b9a-aef2-21b0bae9defc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005726b8}] Aliases:map[]}"
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.056210427Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.057049788Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.05793992Z" level=info msg="Ran pod sandbox 22a2c93f291bf389173ac876157c568ab968e596fbe1d4f73dc162067a6fd18a with infra container: default/busybox/POD" id=4d874ce5-8560-4961-9263-a372e1417da5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.059101513Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9ac749b3-d84d-43f5-8e8e-350712c0d0af name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.059257665Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9ac749b3-d84d-43f5-8e8e-350712c0d0af name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.05930714Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=9ac749b3-d84d-43f5-8e8e-350712c0d0af name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.059864059Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eee8673a-91f5-4181-b155-d33a75826972 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.061522937Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.903853435Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=eee8673a-91f5-4181-b155-d33a75826972 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.904666193Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b8475def-d0ca-4cb2-8273-a99da70b6902 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.906237956Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a6e4b73d-be48-4df9-be4b-0f89dea8bd3f name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.9145874Z" level=info msg="Creating container: default/busybox/busybox" id=23218def-0e69-47b7-aba8-ad5afb956499 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.918676342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.925759659Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.926646238Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.954240129Z" level=info msg="Created container 7ec677e7163ddc96de7f7a3690c8fe88338942e6d31ca25d78ef639e6e2481ec: default/busybox/busybox" id=23218def-0e69-47b7-aba8-ad5afb956499 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.954959944Z" level=info msg="Starting container: 7ec677e7163ddc96de7f7a3690c8fe88338942e6d31ca25d78ef639e6e2481ec" id=d6967b2c-477e-49ea-81dc-4b9cf31cba31 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:40:41 no-preload-171807 crio[769]: time="2025-10-17T19:40:41.956851347Z" level=info msg="Started container" PID=2975 containerID=7ec677e7163ddc96de7f7a3690c8fe88338942e6d31ca25d78ef639e6e2481ec description=default/busybox/busybox id=d6967b2c-477e-49ea-81dc-4b9cf31cba31 name=/runtime.v1.RuntimeService/StartContainer sandboxID=22a2c93f291bf389173ac876157c568ab968e596fbe1d4f73dc162067a6fd18a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7ec677e7163dd       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   22a2c93f291bf       busybox                                     default
	3847c6537fd6a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   b97aed97a607a       coredns-66bc5c9577-gnx5k                    kube-system
	f19c2131b48ef       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   d4d8ddb95b57c       storage-provisioner                         kube-system
	cdf16b5ffad9b       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   13298c8375459       kindnet-tk5hv                               kube-system
	3839119b88428       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      26 seconds ago      Running             kube-proxy                0                   53c2011c21536       kube-proxy-cdbjg                            kube-system
	f81af9629811a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      37 seconds ago      Running             kube-scheduler            0                   f7ab69c46ce35       kube-scheduler-no-preload-171807            kube-system
	85a84bf22e996       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      37 seconds ago      Running             kube-controller-manager   0                   e1a8b40f73cd8       kube-controller-manager-no-preload-171807   kube-system
	f0c2851f586ae       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      37 seconds ago      Running             kube-apiserver            0                   fa3fb2454b943       kube-apiserver-no-preload-171807            kube-system
	454adbcaad153       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      37 seconds ago      Running             etcd                      0                   bd0baa1d36f5e       etcd-no-preload-171807                      kube-system
	
	
	==> coredns [3847c6537fd6ac0c8bcb17ca6d7a3a3453ce63c33d4299c2f2728cabba89da9a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52645 - 29690 "HINFO IN 2324298910417831701.7214286643558372032. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.475521042s
	
	
	==> describe nodes <==
	Name:               no-preload-171807
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-171807
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=no-preload-171807
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_40_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:40:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-171807
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:40:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:40:48 +0000   Fri, 17 Oct 2025 19:40:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:40:48 +0000   Fri, 17 Oct 2025 19:40:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:40:48 +0000   Fri, 17 Oct 2025 19:40:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:40:48 +0000   Fri, 17 Oct 2025 19:40:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-171807
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                4a402992-3a00-457b-a9c9-3f38efedf1af
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-gnx5k                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-171807                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-tk5hv                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-171807             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-171807    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-cdbjg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-171807             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node no-preload-171807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node no-preload-171807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node no-preload-171807 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node no-preload-171807 event: Registered Node no-preload-171807 in Controller
	  Normal  NodeReady                13s   kubelet          Node no-preload-171807 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d1 49 91 03 c2 08 06
	[  +0.000804] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 a9 2b 44 da ae 08 06
	[Oct17 18:59] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022229] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023876] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	
	
	==> etcd [454adbcaad153c0d77cdd57d160721c88a400cde02a2da6bfa851f3370285b48] <==
	{"level":"warn","ts":"2025-10-17T19:40:14.461024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:14.467741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:14.475792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:14.487802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:14.494386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:14.501762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:14.509538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:14.517785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:14.532825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:14.539472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:14.547927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:14.564015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:14.571547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:14.585065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:14.593031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:14.599909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:14.651668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45380","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T19:40:25.404407Z","caller":"traceutil/trace.go:172","msg":"trace[581030662] linearizableReadLoop","detail":"{readStateIndex:389; appliedIndex:389; }","duration":"105.1814ms","start":"2025-10-17T19:40:25.299193Z","end":"2025-10-17T19:40:25.404375Z","steps":["trace[581030662] 'read index received'  (duration: 105.169875ms)","trace[581030662] 'applied index is now lower than readState.Index'  (duration: 9.409µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T19:40:25.404537Z","caller":"traceutil/trace.go:172","msg":"trace[1717562292] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"107.803777ms","start":"2025-10-17T19:40:25.296721Z","end":"2025-10-17T19:40:25.404525Z","steps":["trace[1717562292] 'process raft request'  (duration: 107.691664ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:40:25.404624Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.409177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-171807\" limit:1 ","response":"range_response_count:1 size:4502"}
	{"level":"info","ts":"2025-10-17T19:40:25.404713Z","caller":"traceutil/trace.go:172","msg":"trace[1443438560] range","detail":"{range_begin:/registry/minions/no-preload-171807; range_end:; response_count:1; response_revision:377; }","duration":"105.517336ms","start":"2025-10-17T19:40:25.299183Z","end":"2025-10-17T19:40:25.404700Z","steps":["trace[1443438560] 'agreement among raft nodes before linearized reading'  (duration: 105.289404ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:40:25.918027Z","caller":"traceutil/trace.go:172","msg":"trace[1902538174] linearizableReadLoop","detail":"{readStateIndex:391; appliedIndex:391; }","duration":"119.147557ms","start":"2025-10-17T19:40:25.798848Z","end":"2025-10-17T19:40:25.917995Z","steps":["trace[1902538174] 'read index received'  (duration: 119.13941ms)","trace[1902538174] 'applied index is now lower than readState.Index'  (duration: 6.999µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T19:40:25.918276Z","caller":"traceutil/trace.go:172","msg":"trace[787441876] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"141.349878ms","start":"2025-10-17T19:40:25.776910Z","end":"2025-10-17T19:40:25.918260Z","steps":["trace[787441876] 'process raft request'  (duration: 141.200659ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:40:25.918276Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.395168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-171807\" limit:1 ","response":"range_response_count:1 size:4502"}
	{"level":"info","ts":"2025-10-17T19:40:25.918333Z","caller":"traceutil/trace.go:172","msg":"trace[87604222] range","detail":"{range_begin:/registry/minions/no-preload-171807; range_end:; response_count:1; response_revision:379; }","duration":"119.483476ms","start":"2025-10-17T19:40:25.798839Z","end":"2025-10-17T19:40:25.918322Z","steps":["trace[87604222] 'agreement among raft nodes before linearized reading'  (duration: 119.257127ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:40:50 up  3:23,  0 user,  load average: 4.08, 3.29, 2.02
	Linux no-preload-171807 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cdf16b5ffad9b7bd65844437d73834be10d4b44560fa8f8c54159bb53cf8026b] <==
	I1017 19:40:27.175502       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:40:27.175822       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1017 19:40:27.176007       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:40:27.176025       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:40:27.176051       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:40:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:40:27.473761       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:40:27.473814       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:40:27.473830       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:40:27.474075       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:40:27.874324       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:40:27.874349       1 metrics.go:72] Registering metrics
	I1017 19:40:27.874428       1 controller.go:711] "Syncing nftables rules"
	I1017 19:40:37.474401       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 19:40:37.474459       1 main.go:301] handling current node
	I1017 19:40:47.476850       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 19:40:47.476893       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f0c2851f586aee1852780f7e7756a1fe84b30c514e307d5decc7f0a18343de15] <==
	E1017 19:40:15.226845       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1017 19:40:15.274601       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 19:40:15.276971       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:40:15.277119       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1017 19:40:15.282400       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:40:15.282584       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:40:15.376841       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:40:16.077660       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 19:40:16.083572       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 19:40:16.083597       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:40:16.674126       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:40:16.720310       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:40:16.781467       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 19:40:16.788582       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1017 19:40:16.789997       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:40:16.794997       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:40:17.104778       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:40:17.871043       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 19:40:17.883602       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 19:40:17.893337       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 19:40:22.759027       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:40:22.764582       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:40:23.058299       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:40:23.206973       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1017 19:40:48.877398       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:57754: use of closed network connection
	
	
	==> kube-controller-manager [85a84bf22e996d50102dcd990445de8c13097d07c348096fb95cc28dcd3af478] <==
	I1017 19:40:22.103700       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:40:22.104848       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 19:40:22.104860       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 19:40:22.104900       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 19:40:22.104918       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 19:40:22.104971       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 19:40:22.104994       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 19:40:22.105026       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 19:40:22.105083       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 19:40:22.105440       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 19:40:22.105508       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-171807"
	I1017 19:40:22.105531       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 19:40:22.105520       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 19:40:22.105568       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1017 19:40:22.105671       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 19:40:22.106905       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 19:40:22.107015       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 19:40:22.107025       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 19:40:22.108084       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 19:40:22.109554       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 19:40:22.109554       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:40:22.109637       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 19:40:22.120996       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:40:22.129255       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:40:42.108387       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3839119b884284da92de76861c1c7dd9e692320315c7c9c0461cdab66ddf2436] <==
	I1017 19:40:23.733148       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:40:23.817040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:40:23.918053       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:40:23.918116       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1017 19:40:23.918230       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:40:23.944427       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:40:23.944493       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:40:23.949834       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:40:23.950282       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:40:23.950323       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:40:23.951829       1 config.go:200] "Starting service config controller"
	I1017 19:40:23.951847       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:40:23.951923       1 config.go:309] "Starting node config controller"
	I1017 19:40:23.952421       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:40:23.952473       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:40:23.952386       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:40:23.952541       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:40:23.952374       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:40:23.954098       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:40:24.052041       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:40:24.053250       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:40:24.054411       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f81af9629811addb5f791f64dd42dd8ba44032615c0052651e56a84fb36fc92c] <==
	E1017 19:40:15.140810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:40:15.140844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:40:15.140844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:40:15.140903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:40:15.140902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:40:15.140959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:40:15.142403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:40:15.988382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:40:15.999912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:40:16.079665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:40:16.093885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:40:16.111224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:40:16.122460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:40:16.143263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:40:16.218677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:40:16.234134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:40:16.272854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:40:16.378813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:40:16.389346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:40:16.406830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:40:16.453407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:40:16.457916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:40:16.475266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:40:16.646031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1017 19:40:19.837439       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:40:18 no-preload-171807 kubelet[2302]: E1017 19:40:18.755778    2302 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-no-preload-171807\" already exists" pod="kube-system/kube-scheduler-no-preload-171807"
	Oct 17 19:40:18 no-preload-171807 kubelet[2302]: I1017 19:40:18.766750    2302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-171807" podStartSLOduration=1.766725857 podStartE2EDuration="1.766725857s" podCreationTimestamp="2025-10-17 19:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:40:18.755574402 +0000 UTC m=+1.129190139" watchObservedRunningTime="2025-10-17 19:40:18.766725857 +0000 UTC m=+1.140341592"
	Oct 17 19:40:18 no-preload-171807 kubelet[2302]: I1017 19:40:18.780458    2302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-171807" podStartSLOduration=1.780431889 podStartE2EDuration="1.780431889s" podCreationTimestamp="2025-10-17 19:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:40:18.766954537 +0000 UTC m=+1.140570256" watchObservedRunningTime="2025-10-17 19:40:18.780431889 +0000 UTC m=+1.154047627"
	Oct 17 19:40:18 no-preload-171807 kubelet[2302]: I1017 19:40:18.791017    2302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-171807" podStartSLOduration=1.7910017 podStartE2EDuration="1.7910017s" podCreationTimestamp="2025-10-17 19:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:40:18.780964383 +0000 UTC m=+1.154580118" watchObservedRunningTime="2025-10-17 19:40:18.7910017 +0000 UTC m=+1.164617482"
	Oct 17 19:40:18 no-preload-171807 kubelet[2302]: I1017 19:40:18.803535    2302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-171807" podStartSLOduration=1.803513288 podStartE2EDuration="1.803513288s" podCreationTimestamp="2025-10-17 19:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:40:18.790839251 +0000 UTC m=+1.164454988" watchObservedRunningTime="2025-10-17 19:40:18.803513288 +0000 UTC m=+1.177129034"
	Oct 17 19:40:22 no-preload-171807 kubelet[2302]: I1017 19:40:22.085740    2302 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 17 19:40:22 no-preload-171807 kubelet[2302]: I1017 19:40:22.086436    2302 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 19:40:23 no-preload-171807 kubelet[2302]: I1017 19:40:23.245847    2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f638a0d8-f3ed-4ab9-89c8-b68a756e51e9-xtables-lock\") pod \"kube-proxy-cdbjg\" (UID: \"f638a0d8-f3ed-4ab9-89c8-b68a756e51e9\") " pod="kube-system/kube-proxy-cdbjg"
	Oct 17 19:40:23 no-preload-171807 kubelet[2302]: I1017 19:40:23.245908    2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntdj7\" (UniqueName: \"kubernetes.io/projected/f638a0d8-f3ed-4ab9-89c8-b68a756e51e9-kube-api-access-ntdj7\") pod \"kube-proxy-cdbjg\" (UID: \"f638a0d8-f3ed-4ab9-89c8-b68a756e51e9\") " pod="kube-system/kube-proxy-cdbjg"
	Oct 17 19:40:23 no-preload-171807 kubelet[2302]: I1017 19:40:23.245941    2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f638a0d8-f3ed-4ab9-89c8-b68a756e51e9-kube-proxy\") pod \"kube-proxy-cdbjg\" (UID: \"f638a0d8-f3ed-4ab9-89c8-b68a756e51e9\") " pod="kube-system/kube-proxy-cdbjg"
	Oct 17 19:40:23 no-preload-171807 kubelet[2302]: I1017 19:40:23.245963    2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f638a0d8-f3ed-4ab9-89c8-b68a756e51e9-lib-modules\") pod \"kube-proxy-cdbjg\" (UID: \"f638a0d8-f3ed-4ab9-89c8-b68a756e51e9\") " pod="kube-system/kube-proxy-cdbjg"
	Oct 17 19:40:23 no-preload-171807 kubelet[2302]: I1017 19:40:23.346584    2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf2lx\" (UniqueName: \"kubernetes.io/projected/06dcd1bc-85e2-4dd3-aa6c-8869c5bdcc7f-kube-api-access-qf2lx\") pod \"kindnet-tk5hv\" (UID: \"06dcd1bc-85e2-4dd3-aa6c-8869c5bdcc7f\") " pod="kube-system/kindnet-tk5hv"
	Oct 17 19:40:23 no-preload-171807 kubelet[2302]: I1017 19:40:23.346633    2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06dcd1bc-85e2-4dd3-aa6c-8869c5bdcc7f-lib-modules\") pod \"kindnet-tk5hv\" (UID: \"06dcd1bc-85e2-4dd3-aa6c-8869c5bdcc7f\") " pod="kube-system/kindnet-tk5hv"
	Oct 17 19:40:23 no-preload-171807 kubelet[2302]: I1017 19:40:23.346731    2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/06dcd1bc-85e2-4dd3-aa6c-8869c5bdcc7f-cni-cfg\") pod \"kindnet-tk5hv\" (UID: \"06dcd1bc-85e2-4dd3-aa6c-8869c5bdcc7f\") " pod="kube-system/kindnet-tk5hv"
	Oct 17 19:40:23 no-preload-171807 kubelet[2302]: I1017 19:40:23.346790    2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06dcd1bc-85e2-4dd3-aa6c-8869c5bdcc7f-xtables-lock\") pod \"kindnet-tk5hv\" (UID: \"06dcd1bc-85e2-4dd3-aa6c-8869c5bdcc7f\") " pod="kube-system/kindnet-tk5hv"
	Oct 17 19:40:25 no-preload-171807 kubelet[2302]: I1017 19:40:25.406357    2302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cdbjg" podStartSLOduration=2.406336815 podStartE2EDuration="2.406336815s" podCreationTimestamp="2025-10-17 19:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:40:23.785097688 +0000 UTC m=+6.158713424" watchObservedRunningTime="2025-10-17 19:40:25.406336815 +0000 UTC m=+7.779952548"
	Oct 17 19:40:27 no-preload-171807 kubelet[2302]: I1017 19:40:27.806082    2302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tk5hv" podStartSLOduration=1.398605377 podStartE2EDuration="4.806031531s" podCreationTimestamp="2025-10-17 19:40:23 +0000 UTC" firstStartedPulling="2025-10-17 19:40:23.556109736 +0000 UTC m=+5.929725464" lastFinishedPulling="2025-10-17 19:40:26.963535902 +0000 UTC m=+9.337151618" observedRunningTime="2025-10-17 19:40:27.805397152 +0000 UTC m=+10.179012890" watchObservedRunningTime="2025-10-17 19:40:27.806031531 +0000 UTC m=+10.179647268"
	Oct 17 19:40:37 no-preload-171807 kubelet[2302]: I1017 19:40:37.762232    2302 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 17 19:40:37 no-preload-171807 kubelet[2302]: I1017 19:40:37.855127    2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4654\" (UniqueName: \"kubernetes.io/projected/5cc39277-706b-4f4e-87c5-1af53966018f-kube-api-access-k4654\") pod \"coredns-66bc5c9577-gnx5k\" (UID: \"5cc39277-706b-4f4e-87c5-1af53966018f\") " pod="kube-system/coredns-66bc5c9577-gnx5k"
	Oct 17 19:40:37 no-preload-171807 kubelet[2302]: I1017 19:40:37.855183    2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5cc39277-706b-4f4e-87c5-1af53966018f-config-volume\") pod \"coredns-66bc5c9577-gnx5k\" (UID: \"5cc39277-706b-4f4e-87c5-1af53966018f\") " pod="kube-system/coredns-66bc5c9577-gnx5k"
	Oct 17 19:40:37 no-preload-171807 kubelet[2302]: I1017 19:40:37.855215    2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/72f77177-3dcc-471b-a2a5-baaa7a566bc9-tmp\") pod \"storage-provisioner\" (UID: \"72f77177-3dcc-471b-a2a5-baaa7a566bc9\") " pod="kube-system/storage-provisioner"
	Oct 17 19:40:37 no-preload-171807 kubelet[2302]: I1017 19:40:37.855236    2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhbj9\" (UniqueName: \"kubernetes.io/projected/72f77177-3dcc-471b-a2a5-baaa7a566bc9-kube-api-access-nhbj9\") pod \"storage-provisioner\" (UID: \"72f77177-3dcc-471b-a2a5-baaa7a566bc9\") " pod="kube-system/storage-provisioner"
	Oct 17 19:40:38 no-preload-171807 kubelet[2302]: I1017 19:40:38.822179    2302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.822153304 podStartE2EDuration="15.822153304s" podCreationTimestamp="2025-10-17 19:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:40:38.821619041 +0000 UTC m=+21.195234790" watchObservedRunningTime="2025-10-17 19:40:38.822153304 +0000 UTC m=+21.195769041"
	Oct 17 19:40:38 no-preload-171807 kubelet[2302]: I1017 19:40:38.844843    2302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gnx5k" podStartSLOduration=15.844816312 podStartE2EDuration="15.844816312s" podCreationTimestamp="2025-10-17 19:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:40:38.840083325 +0000 UTC m=+21.213699060" watchObservedRunningTime="2025-10-17 19:40:38.844816312 +0000 UTC m=+21.218432049"
	Oct 17 19:40:40 no-preload-171807 kubelet[2302]: I1017 19:40:40.779093    2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g2jj\" (UniqueName: \"kubernetes.io/projected/22292e6f-a57f-4f4c-baa0-b41b8ee6e47b-kube-api-access-5g2jj\") pod \"busybox\" (UID: \"22292e6f-a57f-4f4c-baa0-b41b8ee6e47b\") " pod="default/busybox"
	
	
	==> storage-provisioner [f19c2131b48ef3548ff6a8e9f79b94036daaa9e9dbc842461650377d66a8a055] <==
	I1017 19:40:38.201082       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 19:40:38.216577       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 19:40:38.216794       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 19:40:38.221556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:40:38.228732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:40:38.228938       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 19:40:38.229156       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-171807_ed383402-7496-45cd-9497-ee6bdb899de5!
	I1017 19:40:38.230190       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e74a4cee-e08d-4268-aaf1-9d923d1555d4", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-171807_ed383402-7496-45cd-9497-ee6bdb899de5 became leader
	W1017 19:40:38.235839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:40:38.244536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:40:38.330921       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-171807_ed383402-7496-45cd-9497-ee6bdb899de5!
	W1017 19:40:40.247785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:40:40.252321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:40:42.255626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:40:42.260127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:40:44.263996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:40:44.268397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:40:46.272675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:40:46.278420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:40:48.282560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:40:48.287327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:40:50.290730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:40:50.295411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-171807 -n no-preload-171807
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-171807 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.44s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-599709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-599709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (270.742736ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:41:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-599709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
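The MK_ADDON_ENABLE_PAUSED error above is raised by minikube's pre-flight paused check, which shells into the node and runs "sudo runc list -f json" before touching the addon; with /run/runc missing on this crio node, the check itself fails. A minimal sketch for re-running that check by hand, assuming the embed-certs-599709 profile is still up (crictl serves only as a cross-check that the runtime itself is healthy):

	# Re-run the paused check minikube performs; fails the same way while /run/runc is absent
	minikube ssh -p embed-certs-599709 -- sudo runc list -f json
	# Cross-check that CRI-O itself is serving containers
	minikube ssh -p embed-certs-599709 -- sudo crictl ps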
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-599709 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-599709 describe deploy/metrics-server -n kube-system: exit status 1 (67.947301ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-599709 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
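Since the enable command exited before the metrics-server deployment was ever created, the describe call at start_stop_delete_test.go:213 had nothing to inspect. For reference, an equivalent of the image assertion against a live deployment, sketched under the assumption that the addon had actually been applied:

	# Print the metrics-server image; the test expects the rewritten registry prefix
	kubectl --context embed-certs-599709 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected output: fake.domain/registry.k8s.io/echoserver:1.4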
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-599709
helpers_test.go:243: (dbg) docker inspect embed-certs-599709:

-- stdout --
	[
	    {
	        "Id": "65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590",
	        "Created": "2025-10-17T19:40:26.431376563Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 728368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:40:26.761273074Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590/hostname",
	        "HostsPath": "/var/lib/docker/containers/65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590/hosts",
	        "LogPath": "/var/lib/docker/containers/65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590/65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590-json.log",
	        "Name": "/embed-certs-599709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-599709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-599709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590",
	                "LowerDir": "/var/lib/docker/overlay2/dd68b6694848505f3dbe0f1fc8175ad12403dd916c34260859d346ccfc6326c8-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dd68b6694848505f3dbe0f1fc8175ad12403dd916c34260859d346ccfc6326c8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dd68b6694848505f3dbe0f1fc8175ad12403dd916c34260859d346ccfc6326c8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dd68b6694848505f3dbe0f1fc8175ad12403dd916c34260859d346ccfc6326c8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-599709",
	                "Source": "/var/lib/docker/volumes/embed-certs-599709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-599709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-599709",
	                "name.minikube.sigs.k8s.io": "embed-certs-599709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2041cd0b192a558b44887c937511cf57204f0fd5a46579bfa1475356dfa5ca6e",
	            "SandboxKey": "/var/run/docker/netns/2041cd0b192a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-599709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:4e:9f:b6:3b:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "116cc729b1af4d4ec359cb40c0efa07f90c3ee85e9adaa14764bb2ee64de2228",
	                    "EndpointID": "e021629c00abdb57f8e8bbf4614a93a1d841cda792dd28d5fd1e9ac23bdddcec",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-599709",
	                        "65267e6fd2cc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-599709 -n embed-certs-599709
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-599709 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-599709 logs -n 25: (1.020988107s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-448344 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo containerd config dump                                                                                                                                                                                                  │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo crio config                                                                                                                                                                                                             │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ delete  │ -p cilium-448344                                                                                                                                                                                                                              │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ start   │ -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:40 UTC │
	│ start   │ -p pause-022753 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ pause   │ -p pause-022753 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ delete  │ -p pause-022753                                                                                                                                                                                                                               │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ start   │ -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:40 UTC │
	│ start   │ -p cert-expiration-141205 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-141205 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ delete  │ -p cert-expiration-141205                                                                                                                                                                                                                     │ cert-expiration-141205 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-907112 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ start   │ -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-599709     │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ stop    │ -p old-k8s-version-907112 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-907112 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ start   │ -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-171807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ stop    │ -p no-preload-171807 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p no-preload-171807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-599709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-599709     │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:41:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:41:08.250416  736846 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:41:08.250809  736846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:41:08.250824  736846 out.go:374] Setting ErrFile to fd 2...
	I1017 19:41:08.250831  736846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:41:08.251202  736846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:41:08.251929  736846 out.go:368] Setting JSON to false
	I1017 19:41:08.253790  736846 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12207,"bootTime":1760717861,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:41:08.253918  736846 start.go:141] virtualization: kvm guest
	I1017 19:41:08.256217  736846 out.go:179] * [no-preload-171807] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:41:08.258221  736846 notify.go:220] Checking for updates...
	I1017 19:41:08.258239  736846 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:41:08.259800  736846 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:41:08.261309  736846 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:41:08.262863  736846 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:41:08.264106  736846 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:41:08.265354  736846 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:41:08.267230  736846 config.go:182] Loaded profile config "no-preload-171807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:41:08.267958  736846 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:41:08.298441  736846 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:41:08.298567  736846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:41:08.373286  736846 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 19:41:08.35983923 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:41:08.373430  736846 docker.go:318] overlay module found
	I1017 19:41:08.375555  736846 out.go:179] * Using the docker driver based on existing profile
	I1017 19:41:08.376888  736846 start.go:305] selected driver: docker
	I1017 19:41:08.376908  736846 start.go:925] validating driver "docker" against &{Name:no-preload-171807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-171807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:41:08.377048  736846 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:41:08.377850  736846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:41:08.454374  736846 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 19:41:08.441737116 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:41:08.454800  736846 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:41:08.454846  736846 cni.go:84] Creating CNI manager for ""
	I1017 19:41:08.454912  736846 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:41:08.454955  736846 start.go:349] cluster config:
	{Name:no-preload-171807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-171807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:41:08.458269  736846 out.go:179] * Starting "no-preload-171807" primary control-plane node in "no-preload-171807" cluster
	I1017 19:41:08.459651  736846 cache.go:123] Beginning downloading kic base image for docker with crio
	W1017 19:41:03.537923  731035 pod_ready.go:104] pod "coredns-5dd5756b68-gnqx4" is not "Ready", error: <nil>
	W1017 19:41:06.037860  731035 pod_ready.go:104] pod "coredns-5dd5756b68-gnqx4" is not "Ready", error: <nil>
	W1017 19:41:08.038276  731035 pod_ready.go:104] pod "coredns-5dd5756b68-gnqx4" is not "Ready", error: <nil>
	I1017 19:41:08.461646  736846 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:41:08.463219  736846 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:41:08.463303  736846 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:41:08.463449  736846 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/no-preload-171807/config.json ...
	I1017 19:41:08.463529  736846 cache.go:107] acquiring lock: {Name:mk6429e0591761799c72c8921fc2155eb46a3270 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:41:08.463637  736846 cache.go:107] acquiring lock: {Name:mk92aa17be2656ac87009f9fa334223ef8ebcfea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:41:08.463704  736846 cache.go:107] acquiring lock: {Name:mk0fd8bd9ba3e77fc05d126ca5d1f12a291762d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:41:08.463781  736846 cache.go:115] /home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1017 19:41:08.463756  736846 cache.go:107] acquiring lock: {Name:mk3c12e89f7bc1b9109831b940083a0fd3d70b13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:41:08.463704  736846 cache.go:115] /home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1017 19:41:08.463824  736846 cache.go:115] /home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1017 19:41:08.463722  736846 cache.go:107] acquiring lock: {Name:mk7c4610ae23c44c707c353078292755842d3e77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:41:08.463835  736846 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 80.848µs
	I1017 19:41:08.463840  736846 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 331.519µs
	I1017 19:41:08.463854  736846 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1017 19:41:08.463834  736846 cache.go:107] acquiring lock: {Name:mkba37cb66af599609629c4d9e6af9791455a46e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:41:08.463860  736846 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1017 19:41:08.463894  736846 cache.go:115] /home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1017 19:41:08.463903  736846 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 72.908µs
	I1017 19:41:08.463918  736846 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1017 19:41:08.463800  736846 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 182.75µs
	I1017 19:41:08.463928  736846 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1017 19:41:08.463824  736846 cache.go:115] /home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1017 19:41:08.463939  736846 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 304.665µs
	I1017 19:41:08.463948  736846 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1017 19:41:08.463853  736846 cache.go:115] /home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1017 19:41:08.463957  736846 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 238.747µs
	I1017 19:41:08.463964  736846 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1017 19:41:08.463529  736846 cache.go:107] acquiring lock: {Name:mk1556e93eb22a0681fc334bfa17252b93b617c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:41:08.463992  736846 cache.go:115] /home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1017 19:41:08.463995  736846 cache.go:107] acquiring lock: {Name:mkccd4461487d941ea3f95c428a46d6f0a7ed4b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:41:08.464061  736846 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 488.597µs
	I1017 19:41:08.464086  736846 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1017 19:41:08.464099  736846 cache.go:115] /home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1017 19:41:08.464127  736846 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 607.548µs
	I1017 19:41:08.464140  736846 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21753-492109/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1017 19:41:08.464157  736846 cache.go:87] Successfully saved all images to host disk.
	I1017 19:41:08.490472  736846 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:41:08.490498  736846 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:41:08.490517  736846 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:41:08.490551  736846 start.go:360] acquireMachinesLock for no-preload-171807: {Name:mk9b9dfc17e86cb22e09ccbacfcc1657e533c4ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:41:08.490712  736846 start.go:364] duration metric: took 108.013µs to acquireMachinesLock for "no-preload-171807"
	I1017 19:41:08.490747  736846 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:41:08.490759  736846 fix.go:54] fixHost starting: 
	I1017 19:41:08.491119  736846 cli_runner.go:164] Run: docker container inspect no-preload-171807 --format={{.State.Status}}
	I1017 19:41:08.514313  736846 fix.go:112] recreateIfNeeded on no-preload-171807: state=Stopped err=<nil>
	W1017 19:41:08.514377  736846 fix.go:138] unexpected machine state, will restart: <nil>
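
The fix.go lines above are minikube's restart-existing-machine path: it inspects the Docker container backing the profile and, on a stopped state, restarts it instead of recreating it. A minimal sketch of that state probe, assuming only the Docker CLI on PATH (function names here are illustrative, not minikube's internals):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState mirrors the probe in the log:
    //   docker container inspect <name> --format={{.State.Status}}
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := containerState("no-preload-171807")
        if err != nil {
            fmt.Println("container not found, would create from scratch:", err)
            return
        }
        // Docker reports stopped containers as "exited"; minikube surfaces
        // this as state=Stopped and takes the "will restart" branch above.
        if state == "exited" {
            fmt.Println("unexpected machine state, will restart")
        } else {
            fmt.Println("machine state:", state)
        }
    }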
	
	
	==> CRI-O <==
	Oct 17 19:40:58 embed-certs-599709 crio[782]: time="2025-10-17T19:40:58.893224311Z" level=info msg="Starting container: 5a80891d293bc2207f274733a58473077ee2356f5b9ab4f7e475ed34c3362583" id=e121046a-03f9-47f9-8374-032ba5ff5085 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:40:58 embed-certs-599709 crio[782]: time="2025-10-17T19:40:58.895601187Z" level=info msg="Started container" PID=1854 containerID=5a80891d293bc2207f274733a58473077ee2356f5b9ab4f7e475ed34c3362583 description=kube-system/coredns-66bc5c9577-v8hls/coredns id=e121046a-03f9-47f9-8374-032ba5ff5085 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed8399d1f508964593ecfa2972a566113471729bfc8103b9593a5620be9409a6
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.091172916Z" level=info msg="Running pod sandbox: default/busybox/POD" id=74e2f9dd-18f0-409b-931e-9d7fe693c53a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.091295499Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.097972864Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:42e03d639e92b8102cb0a8d43f2cb5f2566a8d60767769f197fa2b1bd7404aec UID:970734d3-e268-47f0-9b00-efa6c26f8740 NetNS:/var/run/netns/c8614262-5f60-4c53-b338-a9846ade1309 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005644d0}] Aliases:map[]}"
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.098025268Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.112236431Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:42e03d639e92b8102cb0a8d43f2cb5f2566a8d60767769f197fa2b1bd7404aec UID:970734d3-e268-47f0-9b00-efa6c26f8740 NetNS:/var/run/netns/c8614262-5f60-4c53-b338-a9846ade1309 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005644d0}] Aliases:map[]}"
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.112382771Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.113255366Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.114147323Z" level=info msg="Ran pod sandbox 42e03d639e92b8102cb0a8d43f2cb5f2566a8d60767769f197fa2b1bd7404aec with infra container: default/busybox/POD" id=74e2f9dd-18f0-409b-931e-9d7fe693c53a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.115589281Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a2ac2901-7b0a-47be-b5cc-9684a2521365 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.115767696Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a2ac2901-7b0a-47be-b5cc-9684a2521365 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.115811469Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a2ac2901-7b0a-47be-b5cc-9684a2521365 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.116663805Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=34dcc5d2-3c61-4c2c-b67a-75b894ea7981 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.118861071Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.909904155Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=34dcc5d2-3c61-4c2c-b67a-75b894ea7981 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.910878756Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6806243b-3a72-4483-8f36-17096020dc94 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.912730406Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d59c86cb-2e42-430d-8da6-142f0a15b410 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.917182956Z" level=info msg="Creating container: default/busybox/busybox" id=8ef62dd7-0d8c-4e8a-a801-e4eb0ff90bda name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.918219918Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.923286007Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.923902267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.959781362Z" level=info msg="Created container 71aa305930765c7af0418beb69254c70c5b013fb6f9d2659e679ebcea73f4081: default/busybox/busybox" id=8ef62dd7-0d8c-4e8a-a801-e4eb0ff90bda name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.96048353Z" level=info msg="Starting container: 71aa305930765c7af0418beb69254c70c5b013fb6f9d2659e679ebcea73f4081" id=8c989652-66d0-4ab5-8d88-13371fe17f8c name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:41:02 embed-certs-599709 crio[782]: time="2025-10-17T19:41:02.962857365Z" level=info msg="Started container" PID=1933 containerID=71aa305930765c7af0418beb69254c70c5b013fb6f9d2659e679ebcea73f4081 description=default/busybox/busybox id=8c989652-66d0-4ab5-8d88-13371fe17f8c name=/runtime.v1.RuntimeService/StartContainer sandboxID=42e03d639e92b8102cb0a8d43f2cb5f2566a8d60767769f197fa2b1bd7404aec
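
These CRI-O entries trace the standard CRI sequence for the busybox pod: RunPodSandbox attaches the sandbox to the kindnet CNI network, the image is pulled because ImageStatus reported it missing, then CreateContainer and StartContainer run inside that sandbox. A hedged sketch of the same sequence driven through crictl from Go; pod.json and container.json are hypothetical CRI spec files, not taken from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // run executes a crictl subcommand and returns its trimmed stdout.
    func run(args ...string) (string, error) {
        out, err := exec.Command("crictl", args...).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        // 1. RunPodSandbox: create the sandbox (infra container + CNI attach).
        podID, err := run("runp", "pod.json")
        if err != nil {
            panic(err)
        }
        // 2. PullImage: only needed when ImageStatus says the image is missing.
        if _, err := run("pull", "gcr.io/k8s-minikube/busybox:1.28.4-glibc"); err != nil {
            panic(err)
        }
        // 3. CreateContainer inside the sandbox, then 4. StartContainer.
        ctrID, err := run("create", podID, "container.json", "pod.json")
        if err != nil {
            panic(err)
        }
        if _, err := run("start", ctrID); err != nil {
            panic(err)
        }
        fmt.Println("started container", ctrID, "in sandbox", podID)
    }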
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	71aa305930765       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   42e03d639e92b       busybox                                      default
	5a80891d293bc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   ed8399d1f5089       coredns-66bc5c9577-v8hls                     kube-system
	07f0acd0286f8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   dd84b3555cdb8       storage-provisioner                          kube-system
	5da3848c987f9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   9331f7eebc5af       kube-proxy-l2pwz                             kube-system
	5845dc85fc524       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   2b487ee06756a       kindnet-sj7sj                                kube-system
	8bbd4a1dcccee       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   ec513baeda45d       kube-controller-manager-embed-certs-599709   kube-system
	457413f90d235       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   c1dfa436f422d       kube-apiserver-embed-certs-599709            kube-system
	e0bd07efd86d3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   ab2907e474649       kube-scheduler-embed-certs-599709            kube-system
	5c7fbefe110f2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   def253b25d142       etcd-embed-certs-599709                      kube-system
	
	
	==> coredns [5a80891d293bc2207f274733a58473077ee2356f5b9ab4f7e475ed34c3362583] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52493 - 49566 "HINFO IN 8184292557450603747.8752774148948943052. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079820395s
	
	
	==> describe nodes <==
	Name:               embed-certs-599709
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-599709
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=embed-certs-599709
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_40_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:40:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-599709
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:41:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:40:58 +0000   Fri, 17 Oct 2025 19:40:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:40:58 +0000   Fri, 17 Oct 2025 19:40:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:40:58 +0000   Fri, 17 Oct 2025 19:40:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:40:58 +0000   Fri, 17 Oct 2025 19:40:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-599709
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                4ab96baf-e93c-4e34-b927-fdc987244361
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-v8hls                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-599709                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-sj7sj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-599709             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-599709    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-l2pwz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-599709             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node embed-certs-599709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node embed-certs-599709 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node embed-certs-599709 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node embed-certs-599709 event: Registered Node embed-certs-599709 in Controller
	  Normal  NodeReady                13s   kubelet          Node embed-certs-599709 status is now: NodeReady
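
kubectl describe assembles the view above from the Node object; the Ready condition flipping True at 19:40:58 is the same transition the NodeReady event records. A small client-go sketch that reads those conditions directly, assuming a reachable kubeconfig (purely illustrative, not part of the test harness):

    package main

    import (
        "context"
        "fmt"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-599709", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Print the same condition table `kubectl describe node` renders.
        for _, c := range node.Status.Conditions {
            fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
        }
    }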
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d1 49 91 03 c2 08 06
	[  +0.000804] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 a9 2b 44 da ae 08 06
	[Oct17 18:59] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022229] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023876] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	
	
	==> etcd [5c7fbefe110f253c9bb80bb277e1601b7ea433f8529891e6bba5ddf6d35aa441] <==
	{"level":"warn","ts":"2025-10-17T19:40:38.601797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.611003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.619017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.630567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.639371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.650885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.659520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.668641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.678087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.686215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.693358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.700862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.709177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.717624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.725401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.732788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.741503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.749639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.757008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.764645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.773085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.788546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.795794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.802569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:40:38.880819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37414","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:41:11 up  3:23,  0 user,  load average: 3.77, 3.26, 2.03
	Linux embed-certs-599709 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5845dc85fc524693f6167936326d3a8f692b9c12b954fe6dcf363e6307ddce25] <==
	I1017 19:40:48.120748       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:40:48.121134       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1017 19:40:48.121320       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:40:48.121340       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:40:48.121368       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:40:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:40:48.416841       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:40:48.416873       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:40:48.416887       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:40:48.417485       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:40:48.718569       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:40:48.720996       1 metrics.go:72] Registering metrics
	I1017 19:40:48.721292       1 controller.go:711] "Syncing nftables rules"
	I1017 19:40:58.418818       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 19:40:58.418896       1 main.go:301] handling current node
	I1017 19:41:08.420816       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 19:41:08.420960       1 main.go:301] handling current node
	
	
	==> kube-apiserver [457413f90d235ec7473c7afb7548fb52346777dd438700237d71fdc588cf50c0] <==
	I1017 19:40:39.535370       1 policy_source.go:240] refreshing policies
	I1017 19:40:39.570623       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:40:39.570706       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1017 19:40:39.571262       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 19:40:39.576430       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:40:39.576592       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:40:39.594873       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:40:40.367841       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 19:40:40.372002       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 19:40:40.372022       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:40:40.907366       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:40:40.948489       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:40:41.074030       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 19:40:41.082766       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1017 19:40:41.083916       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:40:41.088443       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:40:41.427230       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:40:41.922401       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 19:40:41.935725       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 19:40:41.945587       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 19:40:47.130665       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:40:47.432463       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:40:47.437013       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:40:47.531667       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1017 19:41:09.883668       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:35638: use of closed network connection
	
	
	==> kube-controller-manager [8bbd4a1dcccee2f61abb0131b98bda3037a529f535f7be97999a86b5a140bfc1] <==
	I1017 19:40:46.424905       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:40:46.424917       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 19:40:46.425289       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 19:40:46.425380       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 19:40:46.425486       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-599709"
	I1017 19:40:46.425535       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1017 19:40:46.425769       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 19:40:46.428959       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 19:40:46.429029       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 19:40:46.429336       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 19:40:46.429395       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 19:40:46.432511       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 19:40:46.432616       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 19:40:46.432725       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 19:40:46.432935       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 19:40:46.433020       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 19:40:46.433075       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 19:40:46.433446       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 19:40:46.433527       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 19:40:46.433565       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 19:40:46.438877       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 19:40:46.442104       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:40:46.445358       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:40:46.469799       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:41:01.427209       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5da3848c987f9cdc043d6974aeb96d28ad2b4739380b083faab9c089d34e4042] <==
	I1017 19:40:47.972261       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:40:48.029829       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:40:48.130413       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:40:48.130471       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1017 19:40:48.130598       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:40:48.152465       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:40:48.152549       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:40:48.159509       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:40:48.159901       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:40:48.159929       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:40:48.163607       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:40:48.163620       1 config.go:200] "Starting service config controller"
	I1017 19:40:48.163630       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:40:48.163636       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:40:48.163657       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:40:48.163663       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:40:48.163721       1 config.go:309] "Starting node config controller"
	I1017 19:40:48.163728       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:40:48.163734       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:40:48.263813       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:40:48.263824       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:40:48.263861       1 shared_informer.go:356] "Caches are synced" controller="service config"
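
The proxier line above notes that kube-proxy's iptables mode sets the route_localnet sysctl so NodePort traffic addressed to 127.0.0.1 remains routable. A minimal sketch of reading and setting that sysctl through its standard procfs path (requires root; a hedged illustration of the knob, not kube-proxy's own code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const sysctlPath = "/proc/sys/net/ipv4/conf/all/route_localnet"

    func main() {
        cur, err := os.ReadFile(sysctlPath)
        if err != nil {
            panic(err)
        }
        fmt.Println("route_localnet =", strings.TrimSpace(string(cur)))

        // Equivalent to `sysctl -w net.ipv4.conf.all.route_localnet=1`,
        // the setting the proxier applies before programming NodePort rules.
        if err := os.WriteFile(sysctlPath, []byte("1\n"), 0o644); err != nil {
            fmt.Println("need root to change it:", err)
        }
    }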
	
	
	==> kube-scheduler [e0bd07efd86d381eee3b43f1636a7923cb6ef22ec3a278005f72c4646a71846c] <==
	E1017 19:40:39.484033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:40:39.484086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:40:39.484126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:40:39.484328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:40:39.484393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:40:39.484460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:40:39.484857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:40:39.484932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:40:39.484937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:40:39.485055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:40:39.485162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:40:39.485193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:40:39.485217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:40:39.485250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:40:39.485276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:40:40.293780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:40:40.352320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:40:40.363888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:40:40.522338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:40:40.526418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:40:40.529706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:40:40.637408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:40:40.657907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 19:40:40.705796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1017 19:40:42.678458       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:40:42 embed-certs-599709 kubelet[1322]: E1017 19:40:42.799305    1322 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-embed-certs-599709\" already exists" pod="kube-system/kube-scheduler-embed-certs-599709"
	Oct 17 19:40:42 embed-certs-599709 kubelet[1322]: I1017 19:40:42.817174    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-599709" podStartSLOduration=1.817127374 podStartE2EDuration="1.817127374s" podCreationTimestamp="2025-10-17 19:40:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:40:42.816605176 +0000 UTC m=+1.156376284" watchObservedRunningTime="2025-10-17 19:40:42.817127374 +0000 UTC m=+1.156898482"
	Oct 17 19:40:42 embed-certs-599709 kubelet[1322]: I1017 19:40:42.834424    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-599709" podStartSLOduration=1.834401673 podStartE2EDuration="1.834401673s" podCreationTimestamp="2025-10-17 19:40:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:40:42.834306013 +0000 UTC m=+1.174077125" watchObservedRunningTime="2025-10-17 19:40:42.834401673 +0000 UTC m=+1.174172778"
	Oct 17 19:40:42 embed-certs-599709 kubelet[1322]: I1017 19:40:42.851041    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-599709" podStartSLOduration=1.851016886 podStartE2EDuration="1.851016886s" podCreationTimestamp="2025-10-17 19:40:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:40:42.85080957 +0000 UTC m=+1.190580695" watchObservedRunningTime="2025-10-17 19:40:42.851016886 +0000 UTC m=+1.190787995"
	Oct 17 19:40:42 embed-certs-599709 kubelet[1322]: I1017 19:40:42.864321    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-599709" podStartSLOduration=1.864297219 podStartE2EDuration="1.864297219s" podCreationTimestamp="2025-10-17 19:40:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:40:42.864293315 +0000 UTC m=+1.204064423" watchObservedRunningTime="2025-10-17 19:40:42.864297219 +0000 UTC m=+1.204068328"
	Oct 17 19:40:46 embed-certs-599709 kubelet[1322]: I1017 19:40:46.405597    1322 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 17 19:40:46 embed-certs-599709 kubelet[1322]: I1017 19:40:46.407076    1322 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 19:40:47 embed-certs-599709 kubelet[1322]: I1017 19:40:47.587645    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e5aa5b6-57e8-4ad9-9b23-53eeffd10715-lib-modules\") pod \"kindnet-sj7sj\" (UID: \"7e5aa5b6-57e8-4ad9-9b23-53eeffd10715\") " pod="kube-system/kindnet-sj7sj"
	Oct 17 19:40:47 embed-certs-599709 kubelet[1322]: I1017 19:40:47.587740    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfx2v\" (UniqueName: \"kubernetes.io/projected/7e5aa5b6-57e8-4ad9-9b23-53eeffd10715-kube-api-access-dfx2v\") pod \"kindnet-sj7sj\" (UID: \"7e5aa5b6-57e8-4ad9-9b23-53eeffd10715\") " pod="kube-system/kindnet-sj7sj"
	Oct 17 19:40:47 embed-certs-599709 kubelet[1322]: I1017 19:40:47.587777    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7e5aa5b6-57e8-4ad9-9b23-53eeffd10715-cni-cfg\") pod \"kindnet-sj7sj\" (UID: \"7e5aa5b6-57e8-4ad9-9b23-53eeffd10715\") " pod="kube-system/kindnet-sj7sj"
	Oct 17 19:40:47 embed-certs-599709 kubelet[1322]: I1017 19:40:47.587800    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e5aa5b6-57e8-4ad9-9b23-53eeffd10715-xtables-lock\") pod \"kindnet-sj7sj\" (UID: \"7e5aa5b6-57e8-4ad9-9b23-53eeffd10715\") " pod="kube-system/kindnet-sj7sj"
	Oct 17 19:40:47 embed-certs-599709 kubelet[1322]: I1017 19:40:47.587829    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ea9dbf3-19b4-4b54-95c1-df8fa679f2bb-xtables-lock\") pod \"kube-proxy-l2pwz\" (UID: \"1ea9dbf3-19b4-4b54-95c1-df8fa679f2bb\") " pod="kube-system/kube-proxy-l2pwz"
	Oct 17 19:40:47 embed-certs-599709 kubelet[1322]: I1017 19:40:47.587850    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ea9dbf3-19b4-4b54-95c1-df8fa679f2bb-kube-proxy\") pod \"kube-proxy-l2pwz\" (UID: \"1ea9dbf3-19b4-4b54-95c1-df8fa679f2bb\") " pod="kube-system/kube-proxy-l2pwz"
	Oct 17 19:40:47 embed-certs-599709 kubelet[1322]: I1017 19:40:47.587876    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ea9dbf3-19b4-4b54-95c1-df8fa679f2bb-lib-modules\") pod \"kube-proxy-l2pwz\" (UID: \"1ea9dbf3-19b4-4b54-95c1-df8fa679f2bb\") " pod="kube-system/kube-proxy-l2pwz"
	Oct 17 19:40:47 embed-certs-599709 kubelet[1322]: I1017 19:40:47.587913    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpfzr\" (UniqueName: \"kubernetes.io/projected/1ea9dbf3-19b4-4b54-95c1-df8fa679f2bb-kube-api-access-kpfzr\") pod \"kube-proxy-l2pwz\" (UID: \"1ea9dbf3-19b4-4b54-95c1-df8fa679f2bb\") " pod="kube-system/kube-proxy-l2pwz"
	Oct 17 19:40:48 embed-certs-599709 kubelet[1322]: I1017 19:40:48.821857    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-sj7sj" podStartSLOduration=1.821835498 podStartE2EDuration="1.821835498s" podCreationTimestamp="2025-10-17 19:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:40:48.821760383 +0000 UTC m=+7.161531491" watchObservedRunningTime="2025-10-17 19:40:48.821835498 +0000 UTC m=+7.161606605"
	Oct 17 19:40:48 embed-certs-599709 kubelet[1322]: I1017 19:40:48.837135    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l2pwz" podStartSLOduration=1.8370908830000001 podStartE2EDuration="1.837090883s" podCreationTimestamp="2025-10-17 19:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:40:48.83698369 +0000 UTC m=+7.176754799" watchObservedRunningTime="2025-10-17 19:40:48.837090883 +0000 UTC m=+7.176861991"
	Oct 17 19:40:58 embed-certs-599709 kubelet[1322]: I1017 19:40:58.493908    1322 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 17 19:40:58 embed-certs-599709 kubelet[1322]: I1017 19:40:58.561624    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2d8a3a4d-3738-4d33-98fd-b99622f860ec-tmp\") pod \"storage-provisioner\" (UID: \"2d8a3a4d-3738-4d33-98fd-b99622f860ec\") " pod="kube-system/storage-provisioner"
	Oct 17 19:40:58 embed-certs-599709 kubelet[1322]: I1017 19:40:58.561696    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5c14de3-5736-4bb4-b7d4-7eee1aade5e2-config-volume\") pod \"coredns-66bc5c9577-v8hls\" (UID: \"a5c14de3-5736-4bb4-b7d4-7eee1aade5e2\") " pod="kube-system/coredns-66bc5c9577-v8hls"
	Oct 17 19:40:58 embed-certs-599709 kubelet[1322]: I1017 19:40:58.561725    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s4rx\" (UniqueName: \"kubernetes.io/projected/a5c14de3-5736-4bb4-b7d4-7eee1aade5e2-kube-api-access-2s4rx\") pod \"coredns-66bc5c9577-v8hls\" (UID: \"a5c14de3-5736-4bb4-b7d4-7eee1aade5e2\") " pod="kube-system/coredns-66bc5c9577-v8hls"
	Oct 17 19:40:58 embed-certs-599709 kubelet[1322]: I1017 19:40:58.561813    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxpmc\" (UniqueName: \"kubernetes.io/projected/2d8a3a4d-3738-4d33-98fd-b99622f860ec-kube-api-access-hxpmc\") pod \"storage-provisioner\" (UID: \"2d8a3a4d-3738-4d33-98fd-b99622f860ec\") " pod="kube-system/storage-provisioner"
	Oct 17 19:40:59 embed-certs-599709 kubelet[1322]: I1017 19:40:59.853302    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-v8hls" podStartSLOduration=12.853276163 podStartE2EDuration="12.853276163s" podCreationTimestamp="2025-10-17 19:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:40:59.852989197 +0000 UTC m=+18.192760308" watchObservedRunningTime="2025-10-17 19:40:59.853276163 +0000 UTC m=+18.193047272"
	Oct 17 19:40:59 embed-certs-599709 kubelet[1322]: I1017 19:40:59.863408    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.863383976 podStartE2EDuration="11.863383976s" podCreationTimestamp="2025-10-17 19:40:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:40:59.863156503 +0000 UTC m=+18.202927613" watchObservedRunningTime="2025-10-17 19:40:59.863383976 +0000 UTC m=+18.203155084"
	Oct 17 19:41:01 embed-certs-599709 kubelet[1322]: I1017 19:41:01.884639    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rch9x\" (UniqueName: \"kubernetes.io/projected/970734d3-e268-47f0-9b00-efa6c26f8740-kube-api-access-rch9x\") pod \"busybox\" (UID: \"970734d3-e268-47f0-9b00-efa6c26f8740\") " pod="default/busybox"
	
	
	==> storage-provisioner [07f0acd0286f8f571803c4c85d4ed3df64fc1f61d9c393b3ebf5ed606497defc] <==
	I1017 19:40:58.899023       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 19:40:58.909656       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 19:40:58.909722       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 19:40:58.912178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:40:58.917283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:40:58.917497       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 19:40:58.917647       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa1d4ee3-e6c8-4c5a-aa5e-ad86f7d4d22b", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-599709_554011d4-b501-4493-8eea-9c23922e5f0c became leader
	I1017 19:40:58.917676       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-599709_554011d4-b501-4493-8eea-9c23922e5f0c!
	W1017 19:40:58.920610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:40:58.925504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:40:59.018769       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-599709_554011d4-b501-4493-8eea-9c23922e5f0c!
	W1017 19:41:00.928808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:41:00.933507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:41:02.938410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:41:02.943438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:41:04.947515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:41:04.951864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:41:06.955184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:41:06.960515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:41:08.964214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:41:08.969351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:41:10.972511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:41:10.978027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
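Note on the scheduler errors near the top of the log: the "Failed to watch ... is forbidden" messages for system:kube-scheduler are the usual startup ordering, where the scheduler's informers begin listing before its RBAC bindings are visible; the subsequent "Caches are synced" line suggests it recovered on its own. If such errors ever persisted, a permission can be probed directly with client-go's SelfSubjectAccessReview. The sketch below is illustrative only (not part of minikube); the kubeconfig path is an assumption, and impersonation is used so the check runs as the scheduler's identity rather than the kubeconfig admin.

// rbaccheck.go - a minimal sketch: ask the API server whether
// system:kube-scheduler may list resourceslices, the permission the
// scheduler log above complained about. Kubeconfig path is assumed.
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
	if err != nil {
		panic(err)
	}
	cfg.Impersonate.UserName = "system:kube-scheduler" // check as the scheduler, not as admin

	cs := kubernetes.NewForConfigOrDie(cfg)
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Group:    "resource.k8s.io",
				Resource: "resourceslices",
				Verb:     "list",
			},
		},
	}
	res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
}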
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-599709 -n embed-certs-599709
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-599709 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.11s)
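Aside from the failure itself, the storage-provisioner log above is healthy except for the repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings: the bundled provisioner still takes its leader lock on an Endpoints object (kube-system/k8s.io-minikube-hostpath). For leader election specifically, the non-deprecated route is a coordination.k8s.io Lease lock. A minimal client-go sketch follows; the namespace, lock name, and identity are carried over from the log, everything else is illustrative and not the provisioner's actual code.

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Lease-based lock instead of the deprecated Endpoints lock the warnings point at.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "embed-certs-599709_example"}, // illustrative identity
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller here */ },
			OnStoppedLeading: func() { /* stop work on lease loss */ },
		},
	})
}

With a Lease lock the renewals go through coordination.k8s.io/v1 and the per-renewal Endpoints deprecation warnings disappear.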

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (5.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-907112 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-907112 --alsologtostderr -v=1: exit status 80 (1.563789763s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-907112 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:41:34.278470  742363 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:41:34.278619  742363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:41:34.278630  742363 out.go:374] Setting ErrFile to fd 2...
	I1017 19:41:34.278637  742363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:41:34.278904  742363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:41:34.279253  742363 out.go:368] Setting JSON to false
	I1017 19:41:34.279325  742363 mustload.go:65] Loading cluster: old-k8s-version-907112
	I1017 19:41:34.279846  742363 config.go:182] Loaded profile config "old-k8s-version-907112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 19:41:34.280365  742363 cli_runner.go:164] Run: docker container inspect old-k8s-version-907112 --format={{.State.Status}}
	I1017 19:41:34.300838  742363 host.go:66] Checking if "old-k8s-version-907112" exists ...
	I1017 19:41:34.301126  742363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:41:34.366847  742363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-17 19:41:34.354909997 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:41:34.367802  742363 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-907112 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 19:41:34.370030  742363 out.go:179] * Pausing node old-k8s-version-907112 ... 
	I1017 19:41:34.371623  742363 host.go:66] Checking if "old-k8s-version-907112" exists ...
	I1017 19:41:34.371992  742363 ssh_runner.go:195] Run: systemctl --version
	I1017 19:41:34.372057  742363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907112
	I1017 19:41:34.391856  742363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/old-k8s-version-907112/id_rsa Username:docker}
	I1017 19:41:34.495642  742363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:41:34.512284  742363 pause.go:52] kubelet running: true
	I1017 19:41:34.512387  742363 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:41:34.711947  742363 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:41:34.712059  742363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:41:34.795614  742363 cri.go:89] found id: "ca3002e51fbb7b46eb280826e262993a5bea288bfe8287e1a0d672392d3182f5"
	I1017 19:41:34.795634  742363 cri.go:89] found id: "850f097d87c9ee81fbb9873f23093120c53509fa1c290e387feea69404395a62"
	I1017 19:41:34.795639  742363 cri.go:89] found id: "1f22d5826138c6ffba6839da3f8f7c8bad03751a7d19957f7e94844c9d6c7fbf"
	I1017 19:41:34.795641  742363 cri.go:89] found id: "322480c43ff27fa7f365721afe1c5e3daaa5de2dc117b038c5cef04c9f210e44"
	I1017 19:41:34.795644  742363 cri.go:89] found id: "52ccc49f9b576e337a415e132dddb263f30a654ae3f3c7a05451e7f01db3687f"
	I1017 19:41:34.795647  742363 cri.go:89] found id: "054c0ba11919a27c613a43b0283529cadb5c43fac2b53a9bac2aaa468326a52d"
	I1017 19:41:34.795649  742363 cri.go:89] found id: "6f75954cb97693039a7a28b7e532c1cda8aaba2ac4c24c3d853c709e351d3c90"
	I1017 19:41:34.795652  742363 cri.go:89] found id: "059b93c2a1d4e2bc4bdba5fd8d096798638e1a2899fc8316153e0e2480d7fc01"
	I1017 19:41:34.795654  742363 cri.go:89] found id: "0aa671be2daa82154fa84103fd15b8447d2b25c3049ce697edb71872df1653db"
	I1017 19:41:34.795662  742363 cri.go:89] found id: "da0f545a9585922305e1f2b72786c36437454935dc6843626a40eaddd980c678"
	I1017 19:41:34.795665  742363 cri.go:89] found id: "e12315128e22de98b736c6a0aef19edd3e650649a5ea832a8c589ed2015cd1d4"
	I1017 19:41:34.795668  742363 cri.go:89] found id: ""
	I1017 19:41:34.795718  742363 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:41:34.808161  742363 retry.go:31] will retry after 203.513723ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:41:34Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:41:35.012533  742363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:41:35.026809  742363 pause.go:52] kubelet running: false
	I1017 19:41:35.026880  742363 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:41:35.186203  742363 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:41:35.186292  742363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:41:35.262021  742363 cri.go:89] found id: "ca3002e51fbb7b46eb280826e262993a5bea288bfe8287e1a0d672392d3182f5"
	I1017 19:41:35.262040  742363 cri.go:89] found id: "850f097d87c9ee81fbb9873f23093120c53509fa1c290e387feea69404395a62"
	I1017 19:41:35.262045  742363 cri.go:89] found id: "1f22d5826138c6ffba6839da3f8f7c8bad03751a7d19957f7e94844c9d6c7fbf"
	I1017 19:41:35.262050  742363 cri.go:89] found id: "322480c43ff27fa7f365721afe1c5e3daaa5de2dc117b038c5cef04c9f210e44"
	I1017 19:41:35.262054  742363 cri.go:89] found id: "52ccc49f9b576e337a415e132dddb263f30a654ae3f3c7a05451e7f01db3687f"
	I1017 19:41:35.262059  742363 cri.go:89] found id: "054c0ba11919a27c613a43b0283529cadb5c43fac2b53a9bac2aaa468326a52d"
	I1017 19:41:35.262063  742363 cri.go:89] found id: "6f75954cb97693039a7a28b7e532c1cda8aaba2ac4c24c3d853c709e351d3c90"
	I1017 19:41:35.262067  742363 cri.go:89] found id: "059b93c2a1d4e2bc4bdba5fd8d096798638e1a2899fc8316153e0e2480d7fc01"
	I1017 19:41:35.262071  742363 cri.go:89] found id: "0aa671be2daa82154fa84103fd15b8447d2b25c3049ce697edb71872df1653db"
	I1017 19:41:35.262086  742363 cri.go:89] found id: "da0f545a9585922305e1f2b72786c36437454935dc6843626a40eaddd980c678"
	I1017 19:41:35.262091  742363 cri.go:89] found id: "e12315128e22de98b736c6a0aef19edd3e650649a5ea832a8c589ed2015cd1d4"
	I1017 19:41:35.262094  742363 cri.go:89] found id: ""
	I1017 19:41:35.262138  742363 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:41:35.274853  742363 retry.go:31] will retry after 219.472315ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:41:35Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:41:35.495310  742363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:41:35.508945  742363 pause.go:52] kubelet running: false
	I1017 19:41:35.509012  742363 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:41:35.668904  742363 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:41:35.668972  742363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:41:35.757939  742363 cri.go:89] found id: "ca3002e51fbb7b46eb280826e262993a5bea288bfe8287e1a0d672392d3182f5"
	I1017 19:41:35.757971  742363 cri.go:89] found id: "850f097d87c9ee81fbb9873f23093120c53509fa1c290e387feea69404395a62"
	I1017 19:41:35.757977  742363 cri.go:89] found id: "1f22d5826138c6ffba6839da3f8f7c8bad03751a7d19957f7e94844c9d6c7fbf"
	I1017 19:41:35.757996  742363 cri.go:89] found id: "322480c43ff27fa7f365721afe1c5e3daaa5de2dc117b038c5cef04c9f210e44"
	I1017 19:41:35.758001  742363 cri.go:89] found id: "52ccc49f9b576e337a415e132dddb263f30a654ae3f3c7a05451e7f01db3687f"
	I1017 19:41:35.758006  742363 cri.go:89] found id: "054c0ba11919a27c613a43b0283529cadb5c43fac2b53a9bac2aaa468326a52d"
	I1017 19:41:35.758010  742363 cri.go:89] found id: "6f75954cb97693039a7a28b7e532c1cda8aaba2ac4c24c3d853c709e351d3c90"
	I1017 19:41:35.758014  742363 cri.go:89] found id: "059b93c2a1d4e2bc4bdba5fd8d096798638e1a2899fc8316153e0e2480d7fc01"
	I1017 19:41:35.758018  742363 cri.go:89] found id: "0aa671be2daa82154fa84103fd15b8447d2b25c3049ce697edb71872df1653db"
	I1017 19:41:35.758026  742363 cri.go:89] found id: "da0f545a9585922305e1f2b72786c36437454935dc6843626a40eaddd980c678"
	I1017 19:41:35.758031  742363 cri.go:89] found id: "e12315128e22de98b736c6a0aef19edd3e650649a5ea832a8c589ed2015cd1d4"
	I1017 19:41:35.758035  742363 cri.go:89] found id: ""
	I1017 19:41:35.758091  742363 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:41:35.776629  742363 out.go:203] 
	W1017 19:41:35.778136  742363 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:41:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:41:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:41:35.778157  742363 out.go:285] * 
	* 
	W1017 19:41:35.783567  742363 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:41:35.785167  742363 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-907112 --alsologtostderr -v=1 failed: exit status 80
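The pause fails in a tight loop: every `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory" (this profile runs crio, and the runc state directory is simply absent on the node), so the backoff visible in the retry.go lines above exhausts its attempts and minikube exits with GUEST_PAUSE. The following is a hedged Go sketch of the general retry-with-jittered-backoff shape those "will retry after ..." lines reflect; the attempt count and durations are made up for illustration, and this is not minikube's actual retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered backoff between
// tries, roughly the behavior the "will retry after 203.513723ms" log lines
// reflect. All parameters here are illustrative.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base + time.Duration(rand.Int63n(int64(base))) // base plus random jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	// Mirrors this test run: the underlying command fails identically each time,
	// so the retries only delay the eventual GUEST_PAUSE-style failure.
	err := retry(3, 200*time.Millisecond, func() error {
		return errors.New("open /run/runc: no such file or directory")
	})
	fmt.Println(err)
}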
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-907112
helpers_test.go:243: (dbg) docker inspect old-k8s-version-907112:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69",
	        "Created": "2025-10-17T19:39:28.47315274Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 731223,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:40:38.76293349Z",
	            "FinishedAt": "2025-10-17T19:40:37.771398372Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69/hostname",
	        "HostsPath": "/var/lib/docker/containers/c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69/hosts",
	        "LogPath": "/var/lib/docker/containers/c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69/c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69-json.log",
	        "Name": "/old-k8s-version-907112",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-907112:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-907112",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69",
	                "LowerDir": "/var/lib/docker/overlay2/e6c30ea89b09c5e82cbf480acc38ef16124ed01036b47190b5890c66fdac61c3-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e6c30ea89b09c5e82cbf480acc38ef16124ed01036b47190b5890c66fdac61c3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e6c30ea89b09c5e82cbf480acc38ef16124ed01036b47190b5890c66fdac61c3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e6c30ea89b09c5e82cbf480acc38ef16124ed01036b47190b5890c66fdac61c3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-907112",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-907112/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-907112",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-907112",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-907112",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3d1befa4b40df50277f98cf930ec271cf0c416396cf6c083ddbeb7267616502c",
	            "SandboxKey": "/var/run/docker/netns/3d1befa4b40d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-907112": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:f9:3e:66:08:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e97054581b64b00fcec9937bf013cc1657d289bfdedb4be6f078111f0c49299",
	                    "EndpointID": "9c66822723f974e4078495ccf27cd977cdf77f297268b6dda52691e36f33896d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-907112",
	                        "c9e45391db92"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
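The inspect output above confirms the failed pause left the container running and unpaused ("Running": true, "Paused": false), matching the status check that follows. For ad-hoc verification outside the test harness, the same fields can be decoded from the docker CLI in a few lines of Go; the container name is taken from this report, the rest is an illustrative sketch.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// containerState mirrors just the State fields of `docker inspect` output
// that the post-mortem above relies on.
type containerState struct {
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
		Paused  bool   `json:"Paused"`
	} `json:"State"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-907112").Output()
	if err != nil {
		panic(err)
	}
	var containers []containerState // docker inspect returns a JSON array
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	for _, c := range containers {
		fmt.Printf("status=%s running=%v paused=%v\n", c.State.Status, c.State.Running, c.State.Paused)
	}
}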
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-907112 -n old-k8s-version-907112
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-907112 -n old-k8s-version-907112: exit status 2 (331.766838ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-907112 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-907112 logs -n 25: (1.41609955s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-448344 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo crio config                                                                                                                                                                                                             │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ delete  │ -p cilium-448344                                                                                                                                                                                                                              │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ start   │ -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:40 UTC │
	│ start   │ -p pause-022753 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ pause   │ -p pause-022753 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ delete  │ -p pause-022753                                                                                                                                                                                                                               │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ start   │ -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:40 UTC │
	│ start   │ -p cert-expiration-141205 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-141205 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ delete  │ -p cert-expiration-141205                                                                                                                                                                                                                     │ cert-expiration-141205 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-907112 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ start   │ -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-599709     │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ stop    │ -p old-k8s-version-907112 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-907112 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ start   │ -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable metrics-server -p no-preload-171807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ stop    │ -p no-preload-171807 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p no-preload-171807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-599709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-599709     │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ stop    │ -p embed-certs-599709 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-599709     │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p embed-certs-599709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-599709     │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-599709     │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ image   │ old-k8s-version-907112 image list --format=json                                                                                                                                                                                               │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-907112 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:41:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:41:28.776903  741107 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:41:28.777152  741107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:41:28.777161  741107 out.go:374] Setting ErrFile to fd 2...
	I1017 19:41:28.777165  741107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:41:28.777345  741107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:41:28.777840  741107 out.go:368] Setting JSON to false
	I1017 19:41:28.779161  741107 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12228,"bootTime":1760717861,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:41:28.779267  741107 start.go:141] virtualization: kvm guest
	I1017 19:41:28.781460  741107 out.go:179] * [embed-certs-599709] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:41:28.782804  741107 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:41:28.782825  741107 notify.go:220] Checking for updates...
	I1017 19:41:28.785149  741107 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:41:28.786410  741107 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:41:28.787859  741107 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:41:28.789143  741107 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:41:28.790495  741107 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:41:28.792215  741107 config.go:182] Loaded profile config "embed-certs-599709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:41:28.792743  741107 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:41:28.817575  741107 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:41:28.817715  741107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:41:28.878339  741107 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 19:41:28.868171722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
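
For readers reproducing this step by hand: the cli_runner call above is plain Docker CLI, and the JSON it returns can be filtered from a shell. A minimal sketch (jq is an assumption of this sketch, not something the harness uses):

    # Dump engine info as JSON, exactly the call logged above.
    docker system info --format '{{json .}}' > /tmp/docker-info.json
    # Pull out the fields minikube's driver validation reads back, per the parsed struct above.
    jq '{Driver, CgroupDriver, NCPU, MemTotal, ServerVersion}' /tmp/docker-info.json
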
	I1017 19:41:28.878459  741107 docker.go:318] overlay module found
	I1017 19:41:28.880360  741107 out.go:179] * Using the docker driver based on existing profile
	I1017 19:41:28.881537  741107 start.go:305] selected driver: docker
	I1017 19:41:28.881558  741107 start.go:925] validating driver "docker" against &{Name:embed-certs-599709 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-599709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:41:28.881720  741107 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:41:28.882448  741107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:41:28.943349  741107 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 19:41:28.932946695 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:41:28.943767  741107 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:41:28.943811  741107 cni.go:84] Creating CNI manager for ""
	I1017 19:41:28.943874  741107 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:41:28.943932  741107 start.go:349] cluster config:
	{Name:embed-certs-599709 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-599709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:41:28.945883  741107 out.go:179] * Starting "embed-certs-599709" primary control-plane node in "embed-certs-599709" cluster
	I1017 19:41:28.947266  741107 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:41:28.948607  741107 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:41:28.949821  741107 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:41:28.949877  741107 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:41:28.949890  741107 cache.go:58] Caching tarball of preloaded images
	I1017 19:41:28.949937  741107 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:41:28.950025  741107 preload.go:233] Found /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:41:28.950041  741107 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:41:28.950163  741107 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/embed-certs-599709/config.json ...
	I1017 19:41:28.971852  741107 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:41:28.971884  741107 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:41:28.971903  741107 cache.go:232] Successfully downloaded all kic artifacts
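
The image cache check above (image.go:81 / image.go:100) reduces to asking the local daemon whether the pinned kicbase digest already exists. A sketch of the equivalent shell logic:

    IMG='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6'
    # image inspect exits non-zero when the reference is absent locally.
    if docker image inspect "$IMG" >/dev/null 2>&1; then
        echo 'found in local docker daemon, skipping pull'
    else
        docker pull "$IMG"
    fi
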
	I1017 19:41:28.971935  741107 start.go:360] acquireMachinesLock for embed-certs-599709: {Name:mk6d9d5bfeac18abd5031b01da957aa047e89617 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:41:28.972014  741107 start.go:364] duration metric: took 53.276µs to acquireMachinesLock for "embed-certs-599709"
	I1017 19:41:28.972040  741107 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:41:28.972051  741107 fix.go:54] fixHost starting: 
	I1017 19:41:28.972372  741107 cli_runner.go:164] Run: docker container inspect embed-certs-599709 --format={{.State.Status}}
	I1017 19:41:28.990663  741107 fix.go:112] recreateIfNeeded on embed-certs-599709: state=Stopped err=<nil>
	W1017 19:41:28.990709  741107 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:41:27.730829  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:41:27.731318  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
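
The failing probe above is an ordinary HTTPS GET against the apiserver. A curl sketch of it (-k skips certificate verification here; the Go client in the log presumably uses the cluster's CA instead):

    # Matches the "connection refused" seen above while the apiserver is down.
    curl -sk --max-time 2 https://192.168.76.2:8443/healthz || echo 'apiserver not reachable yet'
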
	I1017 19:41:27.731396  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:41:27.731466  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:41:27.762535  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:27.762558  696997 cri.go:89] found id: ""
	I1017 19:41:27.762570  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:41:27.762628  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:27.767925  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:41:27.768002  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:41:27.805962  696997 cri.go:89] found id: ""
	I1017 19:41:27.805990  696997 logs.go:282] 0 containers: []
	W1017 19:41:27.805999  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:41:27.806006  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:41:27.806093  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:41:27.836319  696997 cri.go:89] found id: ""
	I1017 19:41:27.836345  696997 logs.go:282] 0 containers: []
	W1017 19:41:27.836353  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:41:27.836359  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:41:27.836406  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:41:27.865606  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:27.865640  696997 cri.go:89] found id: ""
	I1017 19:41:27.865652  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:41:27.865750  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:27.870036  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:41:27.870110  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:41:27.900178  696997 cri.go:89] found id: ""
	I1017 19:41:27.900206  696997 logs.go:282] 0 containers: []
	W1017 19:41:27.900219  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:41:27.900227  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:41:27.900302  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:41:27.930323  696997 cri.go:89] found id: "ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:27.930347  696997 cri.go:89] found id: ""
	I1017 19:41:27.930355  696997 logs.go:282] 1 containers: [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770]
	I1017 19:41:27.930403  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:27.934773  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:41:27.934852  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:41:27.963911  696997 cri.go:89] found id: ""
	I1017 19:41:27.963939  696997 logs.go:282] 0 containers: []
	W1017 19:41:27.963948  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:41:27.963954  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:41:27.964017  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:41:27.995252  696997 cri.go:89] found id: ""
	I1017 19:41:27.995282  696997 logs.go:282] 0 containers: []
	W1017 19:41:27.995293  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:41:27.995303  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:41:27.995319  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:28.055976  696997 logs.go:123] Gathering logs for kube-controller-manager [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770] ...
	I1017 19:41:28.056017  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:28.091298  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:41:28.091324  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:41:28.163533  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:41:28.163571  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:41:28.198974  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:41:28.199010  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:41:28.289862  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:41:28.289901  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:41:28.309874  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:41:28.309910  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:41:28.374759  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:41:28.374779  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:41:28.374797  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
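
The gathering cycle above repeats a fixed crictl/journalctl sweep on every retry. Condensed into one runnable sketch, using only commands that appear in the log, to be run inside the node:

    # Tail logs for every control-plane container that exists.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
        for id in $(sudo crictl ps -a --quiet --name="$name"); do
            sudo crictl logs --tail 400 "$id"
        done
    done
    # Runtime and kubelet journals plus kernel warnings.
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Fails with "connection refused" until the apiserver is back, as shown above.
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
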
	I1017 19:41:30.911755  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:41:30.912217  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:41:30.912268  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:41:30.912326  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:41:30.941583  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:30.941612  696997 cri.go:89] found id: ""
	I1017 19:41:30.941622  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:41:30.941711  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:30.945946  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:41:30.946016  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:41:30.974828  696997 cri.go:89] found id: ""
	I1017 19:41:30.974862  696997 logs.go:282] 0 containers: []
	W1017 19:41:30.974875  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:41:30.974883  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:41:30.974947  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:41:31.004810  696997 cri.go:89] found id: ""
	I1017 19:41:31.004843  696997 logs.go:282] 0 containers: []
	W1017 19:41:31.004852  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:41:31.004858  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:41:31.004919  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:41:31.033736  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:31.033760  696997 cri.go:89] found id: ""
	I1017 19:41:31.033769  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:41:31.033835  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:31.038103  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:41:31.038163  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:41:31.066190  696997 cri.go:89] found id: ""
	I1017 19:41:31.066214  696997 logs.go:282] 0 containers: []
	W1017 19:41:31.066223  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:41:31.066229  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:41:31.066280  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:41:31.095424  696997 cri.go:89] found id: "ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:31.095449  696997 cri.go:89] found id: ""
	I1017 19:41:31.095457  696997 logs.go:282] 1 containers: [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770]
	I1017 19:41:31.095517  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:31.099839  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:41:31.099906  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:41:31.129871  696997 cri.go:89] found id: ""
	I1017 19:41:31.129895  696997 logs.go:282] 0 containers: []
	W1017 19:41:31.129904  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:41:31.129909  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:41:31.129970  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:41:31.158078  696997 cri.go:89] found id: ""
	I1017 19:41:31.158112  696997 logs.go:282] 0 containers: []
	W1017 19:41:31.158124  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:41:31.158136  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:41:31.158154  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:31.192270  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:41:31.192317  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:31.247618  696997 logs.go:123] Gathering logs for kube-controller-manager [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770] ...
	I1017 19:41:31.247657  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:31.278045  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:41:31.278078  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:41:31.333379  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:41:31.333420  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:41:31.368614  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:41:31.368647  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:41:31.461595  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:41:31.461641  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:41:31.480327  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:41:31.480363  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:41:31.540809  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1017 19:41:29.589301  736846 pod_ready.go:104] pod "coredns-66bc5c9577-gnx5k" is not "Ready", error: <nil>
	W1017 19:41:32.088925  736846 pod_ready.go:104] pod "coredns-66bc5c9577-gnx5k" is not "Ready", error: <nil>
	I1017 19:41:28.992499  741107 out.go:252] * Restarting existing docker container for "embed-certs-599709" ...
	I1017 19:41:28.992601  741107 cli_runner.go:164] Run: docker start embed-certs-599709
	I1017 19:41:29.244135  741107 cli_runner.go:164] Run: docker container inspect embed-certs-599709 --format={{.State.Status}}
	I1017 19:41:29.264188  741107 kic.go:430] container "embed-certs-599709" state is running.
	I1017 19:41:29.264598  741107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-599709
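
Restarting the stopped machine is three plain Docker CLI calls; the inspect format strings below are copied from the log lines above:

    docker start embed-certs-599709
    # Should print "running" once the container is back up.
    docker container inspect embed-certs-599709 --format '{{.State.Status}}'
    # IPv4,IPv6 pair for each network the container is attached to.
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' embed-certs-599709
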
	I1017 19:41:29.283671  741107 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/embed-certs-599709/config.json ...
	I1017 19:41:29.283916  741107 machine.go:93] provisionDockerMachine start ...
	I1017 19:41:29.283986  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:29.302120  741107 main.go:141] libmachine: Using SSH client type: native
	I1017 19:41:29.302403  741107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1017 19:41:29.302419  741107 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:41:29.303148  741107 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60450->127.0.0.1:33448: read: connection reset by peer
	I1017 19:41:32.440087  741107 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-599709
	
	I1017 19:41:32.440121  741107 ubuntu.go:182] provisioning hostname "embed-certs-599709"
	I1017 19:41:32.440181  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:32.458980  741107 main.go:141] libmachine: Using SSH client type: native
	I1017 19:41:32.459294  741107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1017 19:41:32.459317  741107 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-599709 && echo "embed-certs-599709" | sudo tee /etc/hostname
	I1017 19:41:32.605924  741107 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-599709
	
	I1017 19:41:32.606028  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:32.625252  741107 main.go:141] libmachine: Using SSH client type: native
	I1017 19:41:32.625481  741107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1017 19:41:32.625498  741107 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-599709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-599709/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-599709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:41:32.764947  741107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
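
Provisioning runs over the forwarded SSH port. A simplified sketch of the hostname steps above, using the port (33448) and key path that appear elsewhere in this log; the /etc/hosts logic is condensed from the remote script shown above:

    KEY=/home/jenkins/minikube-integration/21753-492109/.minikube/machines/embed-certs-599709/id_rsa
    # Set the hostname, persist it, then pin 127.0.1.1 to it in /etc/hosts.
    ssh -i "$KEY" -p 33448 docker@127.0.0.1 \
        'sudo hostname embed-certs-599709 && echo "embed-certs-599709" | sudo tee /etc/hostname'
    ssh -i "$KEY" -p 33448 docker@127.0.0.1 \
        'grep -q "^127.0.1.1" /etc/hosts && sudo sed -i "s/^127.0.1.1.*/127.0.1.1 embed-certs-599709/" /etc/hosts || echo "127.0.1.1 embed-certs-599709" | sudo tee -a /etc/hosts'
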
	I1017 19:41:32.764983  741107 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-492109/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-492109/.minikube}
	I1017 19:41:32.765029  741107 ubuntu.go:190] setting up certificates
	I1017 19:41:32.765047  741107 provision.go:84] configureAuth start
	I1017 19:41:32.765112  741107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-599709
	I1017 19:41:32.784192  741107 provision.go:143] copyHostCerts
	I1017 19:41:32.784263  741107 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem, removing ...
	I1017 19:41:32.784285  741107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem
	I1017 19:41:32.784403  741107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem (1078 bytes)
	I1017 19:41:32.784553  741107 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem, removing ...
	I1017 19:41:32.784567  741107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem
	I1017 19:41:32.784608  741107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem (1123 bytes)
	I1017 19:41:32.784718  741107 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem, removing ...
	I1017 19:41:32.784730  741107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem
	I1017 19:41:32.784768  741107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem (1679 bytes)
	I1017 19:41:32.784843  741107 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem org=jenkins.embed-certs-599709 san=[127.0.0.1 192.168.94.2 embed-certs-599709 localhost minikube]
	I1017 19:41:33.303638  741107 provision.go:177] copyRemoteCerts
	I1017 19:41:33.303715  741107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:41:33.303776  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:33.323063  741107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/embed-certs-599709/id_rsa Username:docker}
	I1017 19:41:33.422113  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 19:41:33.441379  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 19:41:33.461030  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:41:33.480601  741107 provision.go:87] duration metric: took 715.533515ms to configureAuth
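
configureAuth above generates a server certificate carrying the SAN list from provision.go:117 and copies it onto the node. minikube does this in Go; a hypothetical openssl equivalent of the generation step (key size, validity, and file layout are assumptions of this sketch, and process substitution requires bash):

    # Key plus CSR for the org logged above.
    openssl req -new -newkey rsa:2048 -nodes -subj '/O=jenkins.embed-certs-599709' \
        -keyout server-key.pem -out server.csr
    # Sign with the profile CA, attaching the SANs from the log.
    openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem -CAcreateserial \
        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.94.2,DNS:embed-certs-599709,DNS:localhost,DNS:minikube') \
        -days 365 -out server.pem
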
	I1017 19:41:33.480633  741107 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:41:33.480869  741107 config.go:182] Loaded profile config "embed-certs-599709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:41:33.481010  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:33.500229  741107 main.go:141] libmachine: Using SSH client type: native
	I1017 19:41:33.500459  741107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1017 19:41:33.500475  741107 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:41:33.804986  741107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:41:33.805017  741107 machine.go:96] duration metric: took 4.521082611s to provisionDockerMachine
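
The runtime option written a few lines up can be double-checked on the node; a small verification sketch:

    # The drop-in should contain the insecure-registry flag, and crio should be active after its restart.
    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio
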
	I1017 19:41:33.805033  741107 start.go:293] postStartSetup for "embed-certs-599709" (driver="docker")
	I1017 19:41:33.805100  741107 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:41:33.805176  741107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:41:33.805248  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:33.824818  741107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/embed-certs-599709/id_rsa Username:docker}
	I1017 19:41:33.925249  741107 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:41:33.929521  741107 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:41:33.929551  741107 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:41:33.929564  741107 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/addons for local assets ...
	I1017 19:41:33.929620  741107 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/files for local assets ...
	I1017 19:41:33.929731  741107 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem -> 4957252.pem in /etc/ssl/certs
	I1017 19:41:33.929836  741107 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:41:33.938676  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:41:33.958966  741107 start.go:296] duration metric: took 153.913149ms for postStartSetup
	I1017 19:41:33.959079  741107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:41:33.959139  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:33.980109  741107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/embed-certs-599709/id_rsa Username:docker}
	I1017 19:41:34.080819  741107 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:41:34.086226  741107 fix.go:56] duration metric: took 5.114167639s for fixHost
	I1017 19:41:34.086267  741107 start.go:83] releasing machines lock for "embed-certs-599709", held for 5.114226851s
	I1017 19:41:34.086343  741107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-599709
	I1017 19:41:34.107404  741107 ssh_runner.go:195] Run: cat /version.json
	I1017 19:41:34.107468  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:34.107540  741107 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:41:34.107625  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:34.128715  741107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/embed-certs-599709/id_rsa Username:docker}
	I1017 19:41:34.130209  741107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/embed-certs-599709/id_rsa Username:docker}
	I1017 19:41:34.231374  741107 ssh_runner.go:195] Run: systemctl --version
	I1017 19:41:34.315600  741107 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:41:34.365925  741107 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:41:34.372122  741107 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:41:34.372191  741107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:41:34.383150  741107 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:41:34.383176  741107 start.go:495] detecting cgroup driver to use...
	I1017 19:41:34.383208  741107 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 19:41:34.383261  741107 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:41:34.402515  741107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:41:34.417356  741107 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:41:34.417437  741107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:41:34.434531  741107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:41:34.450897  741107 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:41:34.557072  741107 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:41:34.660030  741107 docker.go:234] disabling docker service ...
	I1017 19:41:34.660143  741107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:41:34.678438  741107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:41:34.694967  741107 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:41:34.793751  741107 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:41:34.881231  741107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
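
Pinning the node to a single runtime is the systemctl sequence above, condensed here into one sketch so only CRI-O serves the CRI socket:

    # Stop and mask the Docker shim and Docker itself.
    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    # Both should now report inactive.
    sudo systemctl is-active docker containerd || true
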
	I1017 19:41:34.894850  741107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:41:34.911057  741107 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:41:34.911126  741107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:34.922554  741107 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 19:41:34.922649  741107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:34.933539  741107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:34.943614  741107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:34.953691  741107 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:41:34.962783  741107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:34.972580  741107 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:34.982851  741107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:34.993148  741107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:41:35.002270  741107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:41:35.011593  741107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:41:35.103087  741107 ssh_runner.go:195] Run: sudo systemctl restart crio
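
The sed-based CRI-O reconfiguration above, gathered into one annotated sketch (paths and values verbatim from the log lines above):

    # Point crictl at CRI-O's socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pause image and cgroup driver must match what kubelet expects.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Let pods bind low ports without privilege, and enable IPv4 forwarding.
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio
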
	I1017 19:41:35.221522  741107 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:41:35.221591  741107 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:41:35.226039  741107 start.go:563] Will wait 60s for crictl version
	I1017 19:41:35.226117  741107 ssh_runner.go:195] Run: which crictl
	I1017 19:41:35.230524  741107 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:41:35.261310  741107 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:41:35.261398  741107 ssh_runner.go:195] Run: crio --version
	I1017 19:41:35.292143  741107 ssh_runner.go:195] Run: crio --version
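
Readiness after the restart comes down to "the socket exists and crictl answers", per start.go:542/563 above. A sketch of that wait (the loop form is an assumption; the log only states the 60s budget):

    # Wait up to 60s for the CRI socket, then confirm the runtime identifies itself.
    for i in $(seq 1 60); do
        [ -S /var/run/crio/crio.sock ] && break
        sleep 1
    done
    sudo crictl version   # expect RuntimeName: cri-o, RuntimeVersion: 1.34.1
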
	I1017 19:41:35.326171  741107 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Oct 17 19:41:09 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:09.905330233Z" level=info msg="Created container e12315128e22de98b736c6a0aef19edd3e650649a5ea832a8c589ed2015cd1d4: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lh28q/kubernetes-dashboard" id=2a470f01-d498-49ce-a078-1afa99c106d3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:09 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:09.906136811Z" level=info msg="Starting container: e12315128e22de98b736c6a0aef19edd3e650649a5ea832a8c589ed2015cd1d4" id=5dffc633-55b8-4724-827b-f2dadda0ed67 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:41:09 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:09.908215313Z" level=info msg="Started container" PID=1719 containerID=e12315128e22de98b736c6a0aef19edd3e650649a5ea832a8c589ed2015cd1d4 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lh28q/kubernetes-dashboard id=5dffc633-55b8-4724-827b-f2dadda0ed67 name=/runtime.v1.RuntimeService/StartContainer sandboxID=93b5b8e78ff7347400650d49c608c4a2f23de482161e8a32d8f920b111f9e51a
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.459272835Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cdad3b11-21c3-431f-bf92-9b346d1159c8 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.460193198Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cdbc5241-b976-4556-bcd1-c2ca242f437e name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.461306327Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2292ff59-d379-4be0-ac66-8788cdd692fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.461772005Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.466549321Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.466788486Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e37da2b9f2ee2c8a48cb7b4f60d856043fbe671dab1c590ec763e010abca1ddb/merged/etc/passwd: no such file or directory"
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.466825142Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e37da2b9f2ee2c8a48cb7b4f60d856043fbe671dab1c590ec763e010abca1ddb/merged/etc/group: no such file or directory"
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.467070516Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.494914691Z" level=info msg="Created container ca3002e51fbb7b46eb280826e262993a5bea288bfe8287e1a0d672392d3182f5: kube-system/storage-provisioner/storage-provisioner" id=2292ff59-d379-4be0-ac66-8788cdd692fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.495627677Z" level=info msg="Starting container: ca3002e51fbb7b46eb280826e262993a5bea288bfe8287e1a0d672392d3182f5" id=74c0aaf9-6256-44cd-b6e1-f077c17239af name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.497579131Z" level=info msg="Started container" PID=1741 containerID=ca3002e51fbb7b46eb280826e262993a5bea288bfe8287e1a0d672392d3182f5 description=kube-system/storage-provisioner/storage-provisioner id=74c0aaf9-6256-44cd-b6e1-f077c17239af name=/runtime.v1.RuntimeService/StartContainer sandboxID=06574d67d4e7aab54c033b21728c1fc3206f0d091acd9d3d46e1e7de09d11549
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.325811022Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=62e653b5-2d86-4f77-b726-1f3c6a2a84b6 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.326786222Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=163f7e02-6bb0-4201-8ccf-74aaa783cdff name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.327878114Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96/dashboard-metrics-scraper" id=f620a999-0987-4392-b35c-ca5be8b61618 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.328138781Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.336201634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.336923186Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.373749736Z" level=info msg="Created container da0f545a9585922305e1f2b72786c36437454935dc6843626a40eaddd980c678: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96/dashboard-metrics-scraper" id=f620a999-0987-4392-b35c-ca5be8b61618 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.374473128Z" level=info msg="Starting container: da0f545a9585922305e1f2b72786c36437454935dc6843626a40eaddd980c678" id=5deb8571-9af1-4c84-ab09-86539bb13b80 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.37681392Z" level=info msg="Started container" PID=1778 containerID=da0f545a9585922305e1f2b72786c36437454935dc6843626a40eaddd980c678 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96/dashboard-metrics-scraper id=5deb8571-9af1-4c84-ab09-86539bb13b80 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7417532cdac64402fec20ca62dce6f73e115d6b5bb2773dbbc0ad33430799d35
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.47874618Z" level=info msg="Removing container: 8df50d229107da72795fef3da5005c971bfdcb1b111d146da91ee87a2df81325" id=78556f42-dc15-48e1-a170-2bd9250786d9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.490594745Z" level=info msg="Removed container 8df50d229107da72795fef3da5005c971bfdcb1b111d146da91ee87a2df81325: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96/dashboard-metrics-scraper" id=78556f42-dc15-48e1-a170-2bd9250786d9 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	da0f545a95859       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   2                   7417532cdac64       dashboard-metrics-scraper-5f989dc9cf-tts96       kubernetes-dashboard
	ca3002e51fbb7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           16 seconds ago      Running             storage-provisioner         1                   06574d67d4e7a       storage-provisioner                              kube-system
	e12315128e22d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   27 seconds ago      Running             kubernetes-dashboard        0                   93b5b8e78ff73       kubernetes-dashboard-8694d4445c-lh28q            kubernetes-dashboard
	092bce2982d1a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           47 seconds ago      Running             busybox                     1                   30f4db7df0775       busybox                                          default
	850f097d87c9e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           47 seconds ago      Running             coredns                     0                   3b5f720bf5a6c       coredns-5dd5756b68-gnqx4                         kube-system
	1f22d5826138c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Exited              storage-provisioner         0                   06574d67d4e7a       storage-provisioner                              kube-system
	322480c43ff27       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           47 seconds ago      Running             kindnet-cni                 0                   41ba6ea90c7d3       kindnet-2zq9g                                    kube-system
	52ccc49f9b576       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           47 seconds ago      Running             kube-proxy                  0                   b7fe4409fef94       kube-proxy-lzbjz                                 kube-system
	054c0ba11919a       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           51 seconds ago      Running             kube-apiserver              0                   1f785209d980f       kube-apiserver-old-k8s-version-907112            kube-system
	6f75954cb9769       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           51 seconds ago      Running             etcd                        0                   2fe6486f7e597       etcd-old-k8s-version-907112                      kube-system
	059b93c2a1d4e       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           51 seconds ago      Running             kube-controller-manager     0                   80f9b80fdd8c1       kube-controller-manager-old-k8s-version-907112   kube-system
	0aa671be2daa8       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           51 seconds ago      Running             kube-scheduler              0                   2f372e555ac9e       kube-scheduler-old-k8s-version-907112            kube-system
	
	
	==> coredns [850f097d87c9ee81fbb9873f23093120c53509fa1c290e387feea69404395a62] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45391 - 59905 "HINFO IN 3593438954215795362.5115465353772403331. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066332796s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-907112
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-907112
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=old-k8s-version-907112
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_39_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:39:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-907112
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:41:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:41:19 +0000   Fri, 17 Oct 2025 19:39:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:41:19 +0000   Fri, 17 Oct 2025 19:39:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:41:19 +0000   Fri, 17 Oct 2025 19:39:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:41:19 +0000   Fri, 17 Oct 2025 19:40:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-907112
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                b9d63c36-87df-4fe2-81c2-a81cd9f5ae31
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-5dd5756b68-gnqx4                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-old-k8s-version-907112                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-2zq9g                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-old-k8s-version-907112             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-old-k8s-version-907112    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-lzbjz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-old-k8s-version-907112             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-tts96        0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-lh28q             0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 101s               kube-proxy       
	  Normal  Starting                 47s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node old-k8s-version-907112 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node old-k8s-version-907112 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node old-k8s-version-907112 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           102s               node-controller  Node old-k8s-version-907112 event: Registered Node old-k8s-version-907112 in Controller
	  Normal  NodeReady                89s                kubelet          Node old-k8s-version-907112 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node old-k8s-version-907112 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node old-k8s-version-907112 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node old-k8s-version-907112 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                node-controller  Node old-k8s-version-907112 event: Registered Node old-k8s-version-907112 in Controller
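	Note: the block above is ordinary `kubectl describe node` output as captured by minikube logs; the doubled NodeHasSufficient*/RegisteredNode events line up with the two kubelet starts (115s and 52s before capture) rather than a flapping node. To re-derive just the readiness condition, a sketch assuming the same context name:
	
	  kubectl --context old-k8s-version-907112 get node old-k8s-version-907112 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'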
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d1 49 91 03 c2 08 06
	[  +0.000804] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 a9 2b 44 da ae 08 06
	[Oct17 18:59] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022229] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023876] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
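	Note: "martian source" entries are the kernel flagging packets whose source address is impossible on the receiving interface (here 127.0.0.1 arriving on eth0); with hairpin NAT inside the minikube container this is usually noise rather than a failure. Whether they get logged is governed by sysctls, which can be read back, assuming the sysctl binary is present in the kicbase image:
	
	  docker exec old-k8s-version-907112 sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter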
	
	
	==> etcd [6f75954cb97693039a7a28b7e532c1cda8aaba2ac4c24c3d853c709e351d3c90] <==
	{"level":"info","ts":"2025-10-17T19:40:45.912289Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-17T19:40:45.912406Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T19:40:45.912436Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T19:40:45.912424Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T19:40:45.912481Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T19:40:45.912492Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T19:40:45.914723Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-17T19:40:45.91478Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-17T19:40:45.914799Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-17T19:40:45.914984Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-17T19:40:45.915019Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-17T19:40:47.601127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-17T19:40:47.601183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-17T19:40:47.60123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-17T19:40:47.601248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-17T19:40:47.601256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-17T19:40:47.601279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-17T19:40:47.601288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-17T19:40:47.603153Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T19:40:47.603157Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-907112 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-17T19:40:47.604396Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T19:40:47.604602Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-17T19:40:47.605662Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-17T19:40:47.606358Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-17T19:40:47.60638Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:41:37 up  3:23,  0 user,  load average: 3.42, 3.21, 2.05
	Linux old-k8s-version-907112 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [322480c43ff27fa7f365721afe1c5e3daaa5de2dc117b038c5cef04c9f210e44] <==
	I1017 19:40:49.936082       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:40:49.936567       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 19:40:49.936807       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:40:49.936827       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:40:49.936852       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:40:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:40:50.141244       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:40:50.141317       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:40:50.141334       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:40:50.141863       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:40:50.532590       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:40:50.532957       1 metrics.go:72] Registering metrics
	I1017 19:40:50.533055       1 controller.go:711] "Syncing nftables rules"
	I1017 19:41:00.141832       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:41:00.141929       1 main.go:301] handling current node
	I1017 19:41:10.143780       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:41:10.143837       1 main.go:301] handling current node
	I1017 19:41:20.141387       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:41:20.141427       1 main.go:301] handling current node
	I1017 19:41:30.142262       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:41:30.142310       1 main.go:301] handling current node
	
	
	==> kube-apiserver [054c0ba11919a27c613a43b0283529cadb5c43fac2b53a9bac2aaa468326a52d] <==
	I1017 19:40:48.784602       1 shared_informer.go:318] Caches are synced for configmaps
	I1017 19:40:48.784660       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1017 19:40:48.784705       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1017 19:40:48.785034       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1017 19:40:48.785255       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1017 19:40:48.786387       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1017 19:40:48.786434       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 19:40:48.786564       1 aggregator.go:166] initial CRD sync complete...
	I1017 19:40:48.786622       1 autoregister_controller.go:141] Starting autoregister controller
	I1017 19:40:48.786648       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 19:40:48.786674       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:40:48.789104       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1017 19:40:49.692615       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:40:49.836225       1 controller.go:624] quota admission added evaluator for: namespaces
	I1017 19:40:49.880111       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1017 19:40:49.901969       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:40:49.911308       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:40:49.920517       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1017 19:40:49.964929       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.106.230"}
	I1017 19:40:49.980595       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.7.112"}
	I1017 19:41:01.682563       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:41:01.682609       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:41:01.685586       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1017 19:41:01.879677       1 controller.go:624] quota admission added evaluator for: endpoints
	I1017 19:41:01.879676       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [059b93c2a1d4e2bc4bdba5fd8d096798638e1a2899fc8316153e0e2480d7fc01] <==
	I1017 19:41:01.703635       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-tts96"
	I1017 19:41:01.711858       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.20073ms"
	I1017 19:41:01.715701       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="24.924325ms"
	I1017 19:41:01.722302       1 shared_informer.go:318] Caches are synced for job
	I1017 19:41:01.723041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.217831ms"
	I1017 19:41:01.723195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.348µs"
	I1017 19:41:01.730603       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.677505ms"
	I1017 19:41:01.730716       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="69.898µs"
	I1017 19:41:01.738244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.2µs"
	I1017 19:41:01.868960       1 shared_informer.go:318] Caches are synced for endpoint
	I1017 19:41:01.873000       1 shared_informer.go:318] Caches are synced for resource quota
	I1017 19:41:01.877723       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1017 19:41:01.887456       1 shared_informer.go:318] Caches are synced for resource quota
	I1017 19:41:02.203308       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 19:41:02.219869       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 19:41:02.219905       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1017 19:41:04.423707       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="110.269µs"
	I1017 19:41:05.429241       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.753µs"
	I1017 19:41:06.431727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.681µs"
	I1017 19:41:10.448883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.051365ms"
	I1017 19:41:10.449012       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.503µs"
	I1017 19:41:20.611608       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.929304ms"
	I1017 19:41:20.611755       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.259µs"
	I1017 19:41:25.489754       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="111.068µs"
	I1017 19:41:32.025068       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.134µs"
	
	
	==> kube-proxy [52ccc49f9b576e337a415e132dddb263f30a654ae3f3c7a05451e7f01db3687f] <==
	I1017 19:40:49.769172       1 server_others.go:69] "Using iptables proxy"
	I1017 19:40:49.780414       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1017 19:40:49.808931       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:40:49.812083       1 server_others.go:152] "Using iptables Proxier"
	I1017 19:40:49.812121       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1017 19:40:49.812128       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1017 19:40:49.812163       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1017 19:40:49.812384       1 server.go:846] "Version info" version="v1.28.0"
	I1017 19:40:49.812393       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:40:49.813154       1 config.go:315] "Starting node config controller"
	I1017 19:40:49.813237       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1017 19:40:49.814766       1 config.go:188] "Starting service config controller"
	I1017 19:40:49.814932       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1017 19:40:49.814781       1 config.go:97] "Starting endpoint slice config controller"
	I1017 19:40:49.814979       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1017 19:40:49.915032       1 shared_informer.go:318] Caches are synced for service config
	I1017 19:40:49.915102       1 shared_informer.go:318] Caches are synced for node config
	I1017 19:40:49.915230       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0aa671be2daa82154fa84103fd15b8447d2b25c3049ce697edb71872df1653db] <==
	I1017 19:40:46.345197       1 serving.go:348] Generated self-signed cert in-memory
	W1017 19:40:48.717097       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 19:40:48.717144       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 19:40:48.717183       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 19:40:48.717192       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 19:40:48.753071       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1017 19:40:48.753103       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:40:48.754868       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:40:48.754926       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1017 19:40:48.756497       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1017 19:40:48.756659       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1017 19:40:48.856052       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 17 19:41:01 old-k8s-version-907112 kubelet[723]: I1017 19:41:01.711434     723 topology_manager.go:215] "Topology Admit Handler" podUID="a7ce6c10-a999-4cd3-99b7-8431fa62b484" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-tts96"
	Oct 17 19:41:01 old-k8s-version-907112 kubelet[723]: I1017 19:41:01.794121     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a7ce6c10-a999-4cd3-99b7-8431fa62b484-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-tts96\" (UID: \"a7ce6c10-a999-4cd3-99b7-8431fa62b484\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96"
	Oct 17 19:41:01 old-k8s-version-907112 kubelet[723]: I1017 19:41:01.794283     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8prh\" (UniqueName: \"kubernetes.io/projected/d975038c-cb8d-4021-9882-0dd6334eb118-kube-api-access-z8prh\") pod \"kubernetes-dashboard-8694d4445c-lh28q\" (UID: \"d975038c-cb8d-4021-9882-0dd6334eb118\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lh28q"
	Oct 17 19:41:01 old-k8s-version-907112 kubelet[723]: I1017 19:41:01.794351     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d975038c-cb8d-4021-9882-0dd6334eb118-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-lh28q\" (UID: \"d975038c-cb8d-4021-9882-0dd6334eb118\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lh28q"
	Oct 17 19:41:01 old-k8s-version-907112 kubelet[723]: I1017 19:41:01.794438     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8fhz\" (UniqueName: \"kubernetes.io/projected/a7ce6c10-a999-4cd3-99b7-8431fa62b484-kube-api-access-c8fhz\") pod \"dashboard-metrics-scraper-5f989dc9cf-tts96\" (UID: \"a7ce6c10-a999-4cd3-99b7-8431fa62b484\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96"
	Oct 17 19:41:04 old-k8s-version-907112 kubelet[723]: I1017 19:41:04.411490     723 scope.go:117] "RemoveContainer" containerID="6631f01e880325aca48a0dffba57c8b2f1fdba2215eaca6c635aa77ef4e0cb73"
	Oct 17 19:41:05 old-k8s-version-907112 kubelet[723]: I1017 19:41:05.416076     723 scope.go:117] "RemoveContainer" containerID="6631f01e880325aca48a0dffba57c8b2f1fdba2215eaca6c635aa77ef4e0cb73"
	Oct 17 19:41:05 old-k8s-version-907112 kubelet[723]: I1017 19:41:05.416381     723 scope.go:117] "RemoveContainer" containerID="8df50d229107da72795fef3da5005c971bfdcb1b111d146da91ee87a2df81325"
	Oct 17 19:41:05 old-k8s-version-907112 kubelet[723]: E1017 19:41:05.416829     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tts96_kubernetes-dashboard(a7ce6c10-a999-4cd3-99b7-8431fa62b484)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96" podUID="a7ce6c10-a999-4cd3-99b7-8431fa62b484"
	Oct 17 19:41:06 old-k8s-version-907112 kubelet[723]: I1017 19:41:06.420300     723 scope.go:117] "RemoveContainer" containerID="8df50d229107da72795fef3da5005c971bfdcb1b111d146da91ee87a2df81325"
	Oct 17 19:41:06 old-k8s-version-907112 kubelet[723]: E1017 19:41:06.420556     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tts96_kubernetes-dashboard(a7ce6c10-a999-4cd3-99b7-8431fa62b484)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96" podUID="a7ce6c10-a999-4cd3-99b7-8431fa62b484"
	Oct 17 19:41:10 old-k8s-version-907112 kubelet[723]: I1017 19:41:10.442101     723 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lh28q" podStartSLOduration=1.6118768559999999 podCreationTimestamp="2025-10-17 19:41:01 +0000 UTC" firstStartedPulling="2025-10-17 19:41:02.038292185 +0000 UTC m=+16.807636659" lastFinishedPulling="2025-10-17 19:41:09.868452156 +0000 UTC m=+24.637796635" observedRunningTime="2025-10-17 19:41:10.441633286 +0000 UTC m=+25.210977775" watchObservedRunningTime="2025-10-17 19:41:10.442036832 +0000 UTC m=+25.211381322"
	Oct 17 19:41:12 old-k8s-version-907112 kubelet[723]: I1017 19:41:12.013771     723 scope.go:117] "RemoveContainer" containerID="8df50d229107da72795fef3da5005c971bfdcb1b111d146da91ee87a2df81325"
	Oct 17 19:41:12 old-k8s-version-907112 kubelet[723]: E1017 19:41:12.014073     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tts96_kubernetes-dashboard(a7ce6c10-a999-4cd3-99b7-8431fa62b484)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96" podUID="a7ce6c10-a999-4cd3-99b7-8431fa62b484"
	Oct 17 19:41:20 old-k8s-version-907112 kubelet[723]: I1017 19:41:20.458837     723 scope.go:117] "RemoveContainer" containerID="1f22d5826138c6ffba6839da3f8f7c8bad03751a7d19957f7e94844c9d6c7fbf"
	Oct 17 19:41:25 old-k8s-version-907112 kubelet[723]: I1017 19:41:25.325097     723 scope.go:117] "RemoveContainer" containerID="8df50d229107da72795fef3da5005c971bfdcb1b111d146da91ee87a2df81325"
	Oct 17 19:41:25 old-k8s-version-907112 kubelet[723]: I1017 19:41:25.476570     723 scope.go:117] "RemoveContainer" containerID="8df50d229107da72795fef3da5005c971bfdcb1b111d146da91ee87a2df81325"
	Oct 17 19:41:25 old-k8s-version-907112 kubelet[723]: I1017 19:41:25.476828     723 scope.go:117] "RemoveContainer" containerID="da0f545a9585922305e1f2b72786c36437454935dc6843626a40eaddd980c678"
	Oct 17 19:41:25 old-k8s-version-907112 kubelet[723]: E1017 19:41:25.477244     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tts96_kubernetes-dashboard(a7ce6c10-a999-4cd3-99b7-8431fa62b484)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96" podUID="a7ce6c10-a999-4cd3-99b7-8431fa62b484"
	Oct 17 19:41:32 old-k8s-version-907112 kubelet[723]: I1017 19:41:32.014515     723 scope.go:117] "RemoveContainer" containerID="da0f545a9585922305e1f2b72786c36437454935dc6843626a40eaddd980c678"
	Oct 17 19:41:32 old-k8s-version-907112 kubelet[723]: E1017 19:41:32.014865     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tts96_kubernetes-dashboard(a7ce6c10-a999-4cd3-99b7-8431fa62b484)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96" podUID="a7ce6c10-a999-4cd3-99b7-8431fa62b484"
	Oct 17 19:41:34 old-k8s-version-907112 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 19:41:34 old-k8s-version-907112 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 19:41:34 old-k8s-version-907112 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 19:41:34 old-k8s-version-907112 systemd[1]: kubelet.service: Consumed 1.552s CPU time.
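	Note: the back-off values above (10s, then 20s) follow kubelet's CrashLoopBackOff schedule, which doubles per restart up to a 5m cap; the closing systemd lines show kubelet being stopped deliberately, presumably by the pause operation under test. To see why dashboard-metrics-scraper keeps exiting, a sketch (label selector assumed from the stock dashboard manifests):
	
	  kubectl --context old-k8s-version-907112 -n kubernetes-dashboard \
	    logs -l k8s-app=dashboard-metrics-scraper --previous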
	
	
	==> kubernetes-dashboard [e12315128e22de98b736c6a0aef19edd3e650649a5ea832a8c589ed2015cd1d4] <==
	2025/10/17 19:41:09 Using namespace: kubernetes-dashboard
	2025/10/17 19:41:09 Using in-cluster config to connect to apiserver
	2025/10/17 19:41:09 Using secret token for csrf signing
	2025/10/17 19:41:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 19:41:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 19:41:09 Successful initial request to the apiserver, version: v1.28.0
	2025/10/17 19:41:09 Generating JWE encryption key
	2025/10/17 19:41:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 19:41:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 19:41:10 Initializing JWE encryption key from synchronized object
	2025/10/17 19:41:10 Creating in-cluster Sidecar client
	2025/10/17 19:41:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 19:41:10 Serving insecurely on HTTP port: 9090
	2025/10/17 19:41:09 Starting overwatch
	
	
	==> storage-provisioner [1f22d5826138c6ffba6839da3f8f7c8bad03751a7d19957f7e94844c9d6c7fbf] <==
	I1017 19:40:49.734135       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 19:41:19.737141       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ca3002e51fbb7b46eb280826e262993a5bea288bfe8287e1a0d672392d3182f5] <==
	I1017 19:41:20.510143       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 19:41:20.521396       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 19:41:20.521445       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
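	Note: the first provisioner instance above died fatally when its version probe to 10.96.0.1:443 timed out (the same symptom CoreDNS logged), while the restarted instance got through and moved on to leader election. The lease named in its last line can be inspected directly, assuming it is stored as an Endpoints object as the lease path suggests:
	
	  kubectl --context old-k8s-version-907112 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml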
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-907112 -n old-k8s-version-907112
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-907112 -n old-k8s-version-907112: exit status 2 (398.780503ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
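Note: `minikube status` exits non-zero whenever any tracked component is off its expected state, so "Running" on stdout alongside exit status 2 is consistent with a host that is up while the kubelet (stopped during the pause attempt, per the systemd lines above) is not. A machine-readable view, assuming the same binary:

  out/minikube-linux-amd64 status -p old-k8s-version-907112 --output json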
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-907112 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-907112
helpers_test.go:243: (dbg) docker inspect old-k8s-version-907112:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69",
	        "Created": "2025-10-17T19:39:28.47315274Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 731223,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:40:38.76293349Z",
	            "FinishedAt": "2025-10-17T19:40:37.771398372Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69/hostname",
	        "HostsPath": "/var/lib/docker/containers/c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69/hosts",
	        "LogPath": "/var/lib/docker/containers/c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69/c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69-json.log",
	        "Name": "/old-k8s-version-907112",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-907112:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-907112",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c9e45391db92dcb1aa794e584027e0ef5db54bd40162863d9ac544d6e17efe69",
	                "LowerDir": "/var/lib/docker/overlay2/e6c30ea89b09c5e82cbf480acc38ef16124ed01036b47190b5890c66fdac61c3-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e6c30ea89b09c5e82cbf480acc38ef16124ed01036b47190b5890c66fdac61c3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e6c30ea89b09c5e82cbf480acc38ef16124ed01036b47190b5890c66fdac61c3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e6c30ea89b09c5e82cbf480acc38ef16124ed01036b47190b5890c66fdac61c3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-907112",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-907112/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-907112",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-907112",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-907112",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3d1befa4b40df50277f98cf930ec271cf0c416396cf6c083ddbeb7267616502c",
	            "SandboxKey": "/var/run/docker/netns/3d1befa4b40d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-907112": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:f9:3e:66:08:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e97054581b64b00fcec9937bf013cc1657d289bfdedb4be6f078111f0c49299",
	                    "EndpointID": "9c66822723f974e4078495ccf27cd977cdf77f297268b6dda52691e36f33896d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-907112",
	                        "c9e45391db92"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
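Note: the NetworkSettings.Ports block above shows each control-plane port published only on 127.0.0.1 with an ephemeral host port (8443/tcp maps to 33441 here). One way to pull such a mapping out with docker's Go-template syntax:

  docker inspect old-k8s-version-907112 \
    --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'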
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-907112 -n old-k8s-version-907112
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-907112 -n old-k8s-version-907112: exit status 2 (410.773596ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-907112 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-907112 logs -n 25: (1.274080818s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-448344 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ ssh     │ -p cilium-448344 sudo crio config                                                                                                                                                                                                             │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ delete  │ -p cilium-448344                                                                                                                                                                                                                              │ cilium-448344          │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ start   │ -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:40 UTC │
	│ start   │ -p pause-022753 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ pause   │ -p pause-022753 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │                     │
	│ delete  │ -p pause-022753                                                                                                                                                                                                                               │ pause-022753           │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ start   │ -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:40 UTC │
	│ start   │ -p cert-expiration-141205 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-141205 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ delete  │ -p cert-expiration-141205                                                                                                                                                                                                                     │ cert-expiration-141205 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-907112 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ start   │ -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-599709     │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ stop    │ -p old-k8s-version-907112 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-907112 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ start   │ -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable metrics-server -p no-preload-171807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ stop    │ -p no-preload-171807 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p no-preload-171807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-171807      │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-599709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-599709     │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ stop    │ -p embed-certs-599709 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-599709     │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p embed-certs-599709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-599709     │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-599709     │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ image   │ old-k8s-version-907112 image list --format=json                                                                                                                                                                                               │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-907112 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-907112 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:41:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
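Each entry below follows the glog header just described. As a minimal sketch (not minikube code; the sample line is copied from the log below), the header can be split into its fields like so:

	package main

	import (
		"fmt"
		"regexp"
	)

	// glogLine matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" layout.
	var glogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

	func main() {
		line := "I1017 19:41:28.776903  741107 out.go:360] Setting OutFile to fd 1 ..."
		if m := glogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("level=%s date=%s time=%s pid=%s src=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}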
	I1017 19:41:28.776903  741107 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:41:28.777152  741107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:41:28.777161  741107 out.go:374] Setting ErrFile to fd 2...
	I1017 19:41:28.777165  741107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:41:28.777345  741107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:41:28.777840  741107 out.go:368] Setting JSON to false
	I1017 19:41:28.779161  741107 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12228,"bootTime":1760717861,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:41:28.779267  741107 start.go:141] virtualization: kvm guest
	I1017 19:41:28.781460  741107 out.go:179] * [embed-certs-599709] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:41:28.782804  741107 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:41:28.782825  741107 notify.go:220] Checking for updates...
	I1017 19:41:28.785149  741107 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:41:28.786410  741107 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:41:28.787859  741107 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:41:28.789143  741107 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:41:28.790495  741107 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:41:28.792215  741107 config.go:182] Loaded profile config "embed-certs-599709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:41:28.792743  741107 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:41:28.817575  741107 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:41:28.817715  741107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:41:28.878339  741107 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 19:41:28.868171722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:41:28.878459  741107 docker.go:318] overlay module found
	I1017 19:41:28.880360  741107 out.go:179] * Using the docker driver based on existing profile
	I1017 19:41:28.881537  741107 start.go:305] selected driver: docker
	I1017 19:41:28.881558  741107 start.go:925] validating driver "docker" against &{Name:embed-certs-599709 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-599709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:41:28.881720  741107 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:41:28.882448  741107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:41:28.943349  741107 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 19:41:28.932946695 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:41:28.943767  741107 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:41:28.943811  741107 cni.go:84] Creating CNI manager for ""
	I1017 19:41:28.943874  741107 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:41:28.943932  741107 start.go:349] cluster config:
	{Name:embed-certs-599709 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-599709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:41:28.945883  741107 out.go:179] * Starting "embed-certs-599709" primary control-plane node in "embed-certs-599709" cluster
	I1017 19:41:28.947266  741107 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:41:28.948607  741107 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:41:28.949821  741107 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:41:28.949877  741107 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:41:28.949890  741107 cache.go:58] Caching tarball of preloaded images
	I1017 19:41:28.949937  741107 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:41:28.950025  741107 preload.go:233] Found /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:41:28.950041  741107 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:41:28.950163  741107 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/embed-certs-599709/config.json ...
	I1017 19:41:28.971852  741107 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:41:28.971884  741107 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:41:28.971903  741107 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:41:28.971935  741107 start.go:360] acquireMachinesLock for embed-certs-599709: {Name:mk6d9d5bfeac18abd5031b01da957aa047e89617 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:41:28.972014  741107 start.go:364] duration metric: took 53.276µs to acquireMachinesLock for "embed-certs-599709"
	I1017 19:41:28.972040  741107 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:41:28.972051  741107 fix.go:54] fixHost starting: 
	I1017 19:41:28.972372  741107 cli_runner.go:164] Run: docker container inspect embed-certs-599709 --format={{.State.Status}}
	I1017 19:41:28.990663  741107 fix.go:112] recreateIfNeeded on embed-certs-599709: state=Stopped err=<nil>
	W1017 19:41:28.990709  741107 fix.go:138] unexpected machine state, will restart: <nil>
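The state probe above is a plain `docker container inspect --format={{.State.Status}}`. A hedged stand-alone sketch of the same check, with the profile name taken from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState shells out the same way the cli_runner line above does.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("embed-certs-599709")
		fmt.Println(state, err) // prints "exited <nil>" for the stopped machine above
	}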
	I1017 19:41:27.730829  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:41:27.731318  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:41:27.731396  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:41:27.731466  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:41:27.762535  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:27.762558  696997 cri.go:89] found id: ""
	I1017 19:41:27.762570  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:41:27.762628  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:27.767925  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:41:27.768002  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:41:27.805962  696997 cri.go:89] found id: ""
	I1017 19:41:27.805990  696997 logs.go:282] 0 containers: []
	W1017 19:41:27.805999  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:41:27.806006  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:41:27.806093  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:41:27.836319  696997 cri.go:89] found id: ""
	I1017 19:41:27.836345  696997 logs.go:282] 0 containers: []
	W1017 19:41:27.836353  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:41:27.836359  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:41:27.836406  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:41:27.865606  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:27.865640  696997 cri.go:89] found id: ""
	I1017 19:41:27.865652  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:41:27.865750  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:27.870036  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:41:27.870110  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:41:27.900178  696997 cri.go:89] found id: ""
	I1017 19:41:27.900206  696997 logs.go:282] 0 containers: []
	W1017 19:41:27.900219  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:41:27.900227  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:41:27.900302  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:41:27.930323  696997 cri.go:89] found id: "ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:27.930347  696997 cri.go:89] found id: ""
	I1017 19:41:27.930355  696997 logs.go:282] 1 containers: [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770]
	I1017 19:41:27.930403  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:27.934773  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:41:27.934852  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:41:27.963911  696997 cri.go:89] found id: ""
	I1017 19:41:27.963939  696997 logs.go:282] 0 containers: []
	W1017 19:41:27.963948  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:41:27.963954  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:41:27.964017  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:41:27.995252  696997 cri.go:89] found id: ""
	I1017 19:41:27.995282  696997 logs.go:282] 0 containers: []
	W1017 19:41:27.995293  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:41:27.995303  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:41:27.995319  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:28.055976  696997 logs.go:123] Gathering logs for kube-controller-manager [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770] ...
	I1017 19:41:28.056017  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:28.091298  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:41:28.091324  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:41:28.163533  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:41:28.163571  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:41:28.198974  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:41:28.199010  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:41:28.289862  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:41:28.289901  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:41:28.309874  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:41:28.309910  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:41:28.374759  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
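The healthz check that keeps failing here (api_server.go:253/269) is an HTTPS GET against the apiserver with TLS verification relaxed. A rough stand-alone equivalent, using the address from this run:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Skip certificate verification: the apiserver cert is signed by the
		// cluster's own CA, which this quick probe does not load.
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // the "connection refused" case in the log
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}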
	I1017 19:41:28.374779  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:41:28.374797  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:30.911755  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:41:30.912217  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:41:30.912268  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:41:30.912326  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:41:30.941583  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:30.941612  696997 cri.go:89] found id: ""
	I1017 19:41:30.941622  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:41:30.941711  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:30.945946  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:41:30.946016  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:41:30.974828  696997 cri.go:89] found id: ""
	I1017 19:41:30.974862  696997 logs.go:282] 0 containers: []
	W1017 19:41:30.974875  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:41:30.974883  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:41:30.974947  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:41:31.004810  696997 cri.go:89] found id: ""
	I1017 19:41:31.004843  696997 logs.go:282] 0 containers: []
	W1017 19:41:31.004852  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:41:31.004858  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:41:31.004919  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:41:31.033736  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:31.033760  696997 cri.go:89] found id: ""
	I1017 19:41:31.033769  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:41:31.033835  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:31.038103  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:41:31.038163  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:41:31.066190  696997 cri.go:89] found id: ""
	I1017 19:41:31.066214  696997 logs.go:282] 0 containers: []
	W1017 19:41:31.066223  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:41:31.066229  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:41:31.066280  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:41:31.095424  696997 cri.go:89] found id: "ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:31.095449  696997 cri.go:89] found id: ""
	I1017 19:41:31.095457  696997 logs.go:282] 1 containers: [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770]
	I1017 19:41:31.095517  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:31.099839  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:41:31.099906  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:41:31.129871  696997 cri.go:89] found id: ""
	I1017 19:41:31.129895  696997 logs.go:282] 0 containers: []
	W1017 19:41:31.129904  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:41:31.129909  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:41:31.129970  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:41:31.158078  696997 cri.go:89] found id: ""
	I1017 19:41:31.158112  696997 logs.go:282] 0 containers: []
	W1017 19:41:31.158124  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:41:31.158136  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:41:31.158154  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:31.192270  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:41:31.192317  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:31.247618  696997 logs.go:123] Gathering logs for kube-controller-manager [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770] ...
	I1017 19:41:31.247657  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:31.278045  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:41:31.278078  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:41:31.333379  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:41:31.333420  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:41:31.368614  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:41:31.368647  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:41:31.461595  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:41:31.461641  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:41:31.480327  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:41:31.480363  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:41:31.540809  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1017 19:41:29.589301  736846 pod_ready.go:104] pod "coredns-66bc5c9577-gnx5k" is not "Ready", error: <nil>
	W1017 19:41:32.088925  736846 pod_ready.go:104] pod "coredns-66bc5c9577-gnx5k" is not "Ready", error: <nil>
	I1017 19:41:28.992499  741107 out.go:252] * Restarting existing docker container for "embed-certs-599709" ...
	I1017 19:41:28.992601  741107 cli_runner.go:164] Run: docker start embed-certs-599709
	I1017 19:41:29.244135  741107 cli_runner.go:164] Run: docker container inspect embed-certs-599709 --format={{.State.Status}}
	I1017 19:41:29.264188  741107 kic.go:430] container "embed-certs-599709" state is running.
	I1017 19:41:29.264598  741107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-599709
	I1017 19:41:29.283671  741107 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/embed-certs-599709/config.json ...
	I1017 19:41:29.283916  741107 machine.go:93] provisionDockerMachine start ...
	I1017 19:41:29.283986  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:29.302120  741107 main.go:141] libmachine: Using SSH client type: native
	I1017 19:41:29.302403  741107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1017 19:41:29.302419  741107 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:41:29.303148  741107 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60450->127.0.0.1:33448: read: connection reset by peer
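The three-second gap before the next successful SSH command suggests the dial is retried until the restarted container accepts connections on the forwarded port. A simplified sketch of such a wait loop (the port number is from this run; the retry count and interval are assumptions):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Poll the forwarded SSH port until the restarted container accepts it.
		for attempt := 0; attempt < 30; attempt++ {
			conn, err := net.DialTimeout("tcp", "127.0.0.1:33448", time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("ssh port is accepting connections")
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("gave up waiting for the ssh port")
	}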
	I1017 19:41:32.440087  741107 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-599709
	
	I1017 19:41:32.440121  741107 ubuntu.go:182] provisioning hostname "embed-certs-599709"
	I1017 19:41:32.440181  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:32.458980  741107 main.go:141] libmachine: Using SSH client type: native
	I1017 19:41:32.459294  741107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1017 19:41:32.459317  741107 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-599709 && echo "embed-certs-599709" | sudo tee /etc/hostname
	I1017 19:41:32.605924  741107 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-599709
	
	I1017 19:41:32.606028  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:32.625252  741107 main.go:141] libmachine: Using SSH client type: native
	I1017 19:41:32.625481  741107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1017 19:41:32.625498  741107 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-599709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-599709/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-599709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:41:32.764947  741107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:41:32.764983  741107 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-492109/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-492109/.minikube}
	I1017 19:41:32.765029  741107 ubuntu.go:190] setting up certificates
	I1017 19:41:32.765047  741107 provision.go:84] configureAuth start
	I1017 19:41:32.765112  741107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-599709
	I1017 19:41:32.784192  741107 provision.go:143] copyHostCerts
	I1017 19:41:32.784263  741107 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem, removing ...
	I1017 19:41:32.784285  741107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem
	I1017 19:41:32.784403  741107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem (1078 bytes)
	I1017 19:41:32.784553  741107 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem, removing ...
	I1017 19:41:32.784567  741107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem
	I1017 19:41:32.784608  741107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem (1123 bytes)
	I1017 19:41:32.784718  741107 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem, removing ...
	I1017 19:41:32.784730  741107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem
	I1017 19:41:32.784768  741107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem (1679 bytes)
	I1017 19:41:32.784843  741107 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem org=jenkins.embed-certs-599709 san=[127.0.0.1 192.168.94.2 embed-certs-599709 localhost minikube]
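provision.go:117 generates a server certificate whose SAN list is printed above. A minimal, self-signed illustration with the same SANs and the CertExpiration from the profile config (the real flow signs with the minikube CA rather than self-signing):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-599709"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-599709", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		}
		// Self-signed for brevity; minikube signs with its CA key pair instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}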
	I1017 19:41:33.303638  741107 provision.go:177] copyRemoteCerts
	I1017 19:41:33.303715  741107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:41:33.303776  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:33.323063  741107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/embed-certs-599709/id_rsa Username:docker}
	I1017 19:41:33.422113  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 19:41:33.441379  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 19:41:33.461030  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:41:33.480601  741107 provision.go:87] duration metric: took 715.533515ms to configureAuth
	I1017 19:41:33.480633  741107 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:41:33.480869  741107 config.go:182] Loaded profile config "embed-certs-599709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:41:33.481010  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:33.500229  741107 main.go:141] libmachine: Using SSH client type: native
	I1017 19:41:33.500459  741107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1017 19:41:33.500475  741107 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:41:33.804986  741107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:41:33.805017  741107 machine.go:96] duration metric: took 4.521082611s to provisionDockerMachine
	I1017 19:41:33.805033  741107 start.go:293] postStartSetup for "embed-certs-599709" (driver="docker")
	I1017 19:41:33.805100  741107 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:41:33.805176  741107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:41:33.805248  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:33.824818  741107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/embed-certs-599709/id_rsa Username:docker}
	I1017 19:41:33.925249  741107 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:41:33.929521  741107 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:41:33.929551  741107 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:41:33.929564  741107 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/addons for local assets ...
	I1017 19:41:33.929620  741107 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/files for local assets ...
	I1017 19:41:33.929731  741107 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem -> 4957252.pem in /etc/ssl/certs
	I1017 19:41:33.929836  741107 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:41:33.938676  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:41:33.958966  741107 start.go:296] duration metric: took 153.913149ms for postStartSetup
	I1017 19:41:33.959079  741107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:41:33.959139  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:33.980109  741107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/embed-certs-599709/id_rsa Username:docker}
	I1017 19:41:34.080819  741107 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:41:34.086226  741107 fix.go:56] duration metric: took 5.114167639s for fixHost
	I1017 19:41:34.086267  741107 start.go:83] releasing machines lock for "embed-certs-599709", held for 5.114226851s
	I1017 19:41:34.086343  741107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-599709
	I1017 19:41:34.107404  741107 ssh_runner.go:195] Run: cat /version.json
	I1017 19:41:34.107468  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:34.107540  741107 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:41:34.107625  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:34.128715  741107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/embed-certs-599709/id_rsa Username:docker}
	I1017 19:41:34.130209  741107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/embed-certs-599709/id_rsa Username:docker}
	I1017 19:41:34.231374  741107 ssh_runner.go:195] Run: systemctl --version
	I1017 19:41:34.315600  741107 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:41:34.365925  741107 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:41:34.372122  741107 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:41:34.372191  741107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:41:34.383150  741107 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:41:34.383176  741107 start.go:495] detecting cgroup driver to use...
	I1017 19:41:34.383208  741107 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 19:41:34.383261  741107 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:41:34.402515  741107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:41:34.417356  741107 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:41:34.417437  741107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:41:34.434531  741107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:41:34.450897  741107 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:41:34.557072  741107 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:41:34.660030  741107 docker.go:234] disabling docker service ...
	I1017 19:41:34.660143  741107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:41:34.678438  741107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:41:34.694967  741107 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:41:34.793751  741107 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:41:34.881231  741107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:41:34.894850  741107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:41:34.911057  741107 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:41:34.911126  741107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:34.922554  741107 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 19:41:34.922649  741107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:34.933539  741107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:34.943614  741107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:34.953691  741107 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:41:34.962783  741107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:34.972580  741107 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:34.982851  741107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:34.993148  741107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:41:35.002270  741107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:41:35.011593  741107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:41:35.103087  741107 ssh_runner.go:195] Run: sudo systemctl restart crio
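For orientation, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys before crio is restarted. This is an assumed reconstruction: only the individual key rewrites appear in the log, not the full file.

	# assumed shape of /etc/crio/crio.conf.d/02-crio.conf after the edits above
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]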
	I1017 19:41:35.221522  741107 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:41:35.221591  741107 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:41:35.226039  741107 start.go:563] Will wait 60s for crictl version
	I1017 19:41:35.226117  741107 ssh_runner.go:195] Run: which crictl
	I1017 19:41:35.230524  741107 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:41:35.261310  741107 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:41:35.261398  741107 ssh_runner.go:195] Run: crio --version
	I1017 19:41:35.292143  741107 ssh_runner.go:195] Run: crio --version
	I1017 19:41:35.326171  741107 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:41:35.327526  741107 cli_runner.go:164] Run: docker network inspect embed-certs-599709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:41:35.345142  741107 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1017 19:41:35.349786  741107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:41:35.361290  741107 kubeadm.go:883] updating cluster {Name:embed-certs-599709 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-599709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:41:35.361437  741107 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:41:35.361499  741107 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:41:35.395300  741107 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:41:35.395328  741107 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:41:35.395375  741107 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:41:35.423675  741107 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:41:35.423726  741107 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:41:35.423737  741107 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1017 19:41:35.423861  741107 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-599709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-599709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:41:35.423943  741107 ssh_runner.go:195] Run: crio config
	I1017 19:41:35.472333  741107 cni.go:84] Creating CNI manager for ""
	I1017 19:41:35.472370  741107 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:41:35.472390  741107 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:41:35.472415  741107 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-599709 NodeName:embed-certs-599709 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:41:35.472547  741107 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-599709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 19:41:35.472614  741107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:41:35.481777  741107 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:41:35.481869  741107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:41:35.490558  741107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1017 19:41:35.504296  741107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:41:35.518946  741107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
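The 2214-byte file staged here is the config rendered above, written as kubeadm.yaml.new; later in this run (the diff at 19:41:36.512195) minikube compares it against the existing kubeadm.yaml to decide whether the restarting control plane needs reconfiguration. A minimal sketch of that render-then-diff pattern, using a heavily trimmed hypothetical template (not minikube's actual code):

	// config_diff_sketch.go — render config from a template, then reconfigure
	// only if it differs from what is already on disk.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"text/template"
	)

	// kubeadmTmpl is a hypothetical, trimmed stand-in for the full
	// InitConfiguration/ClusterConfiguration rendered above.
	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.Port}}
	`

	type params struct {
		NodeIP string
		Port   int
	}

	func main() {
		var buf bytes.Buffer
		tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		if err := tmpl.Execute(&buf, params{NodeIP: "192.168.94.2", Port: 8443}); err != nil {
			panic(err)
		}
		old, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil || !bytes.Equal(old, buf.Bytes()) {
			fmt.Println("config changed: full reconfiguration required")
			return
		}
		fmt.Println("running cluster does not require reconfiguration")
	}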
	I1017 19:41:35.535427  741107 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1017 19:41:35.540157  741107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
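The bash one-liner above makes the /etc/hosts update idempotent: grep -v strips any stale line ending in a tab plus the host name, echo appends the fresh mapping, and the result is staged in a temp file before being copied over /etc/hosts, so a failure never leaves the file truncated. A rough Go equivalent (a sketch; upsertHost is a hypothetical helper, not minikube's code):

	package main

	import (
		"os"
		"strings"
	)

	// upsertHost mirrors the one-liner: drop stale entries for name, append a
	// fresh ip<TAB>name line, write to a temp file, then swap it into place.
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) { // like grep -v $'\t<name>$'
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			return err
		}
		return os.Rename(tmp, path) // the log uses sudo cp; same effect here
	}

	func main() {
		_ = upsertHost("/etc/hosts", "192.168.94.2", "control-plane.minikube.internal")
	}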
	I1017 19:41:35.555635  741107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:41:35.637554  741107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:41:35.661618  741107 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/embed-certs-599709 for IP: 192.168.94.2
	I1017 19:41:35.661644  741107 certs.go:195] generating shared ca certs ...
	I1017 19:41:35.661668  741107 certs.go:227] acquiring lock for ca certs: {Name:mkc97483d62151ba5c32d923dd19e3e2b3661468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:35.661849  741107 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key
	I1017 19:41:35.661905  741107 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key
	I1017 19:41:35.661919  741107 certs.go:257] generating profile certs ...
	I1017 19:41:35.662059  741107 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/embed-certs-599709/client.key
	I1017 19:41:35.662146  741107 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/embed-certs-599709/apiserver.key.fbe8348c
	I1017 19:41:35.662199  741107 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/embed-certs-599709/proxy-client.key
	I1017 19:41:35.662357  741107 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725.pem (1338 bytes)
	W1017 19:41:35.662404  741107 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725_empty.pem, impossibly tiny 0 bytes
	I1017 19:41:35.662419  741107 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 19:41:35.662455  741107 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem (1078 bytes)
	I1017 19:41:35.662486  741107 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:41:35.662521  741107 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem (1679 bytes)
	I1017 19:41:35.662577  741107 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:41:35.663331  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:41:35.685020  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:41:35.708406  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:41:35.732249  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 19:41:35.759903  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/embed-certs-599709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1017 19:41:35.782588  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/embed-certs-599709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:41:35.804267  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/embed-certs-599709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:41:35.826205  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/embed-certs-599709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 19:41:35.846588  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /usr/share/ca-certificates/4957252.pem (1708 bytes)
	I1017 19:41:35.869019  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:41:35.891659  741107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725.pem --> /usr/share/ca-certificates/495725.pem (1338 bytes)
	I1017 19:41:35.911955  741107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:41:35.927382  741107 ssh_runner.go:195] Run: openssl version
	I1017 19:41:35.934632  741107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4957252.pem && ln -fs /usr/share/ca-certificates/4957252.pem /etc/ssl/certs/4957252.pem"
	I1017 19:41:35.944464  741107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4957252.pem
	I1017 19:41:35.949181  741107 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/4957252.pem
	I1017 19:41:35.949251  741107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4957252.pem
	I1017 19:41:35.986795  741107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4957252.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:41:35.996575  741107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:41:36.006140  741107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:41:36.010301  741107 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:41:36.010378  741107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:41:36.047781  741107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:41:36.058926  741107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/495725.pem && ln -fs /usr/share/ca-certificates/495725.pem /etc/ssl/certs/495725.pem"
	I1017 19:41:36.071123  741107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/495725.pem
	I1017 19:41:36.076654  741107 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/495725.pem
	I1017 19:41:36.076751  741107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/495725.pem
	I1017 19:41:36.118506  741107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/495725.pem /etc/ssl/certs/51391683.0"
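The hash/link sequence above reproduces what OpenSSL's c_rehash does: "openssl x509 -hash -noout" prints the certificate's subject-name hash (b5213941 for minikubeCA.pem, as the symlink name shows), and OpenSSL-based clients locate trusted CAs through /etc/ssl/certs/<hash>.0 links. A minimal sketch of the same linking step, shelling out to openssl:

	// rehash_sketch.go — link a CA cert into certsDir under its subject hash.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func linkByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
		link := certsDir + "/" + hash + ".0"
		os.Remove(link) // ignore error; mirrors ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}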
	I1017 19:41:36.128068  741107 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:41:36.132600  741107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:41:36.171287  741107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:41:36.222365  741107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:41:36.283378  741107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:41:36.347789  741107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:41:36.396512  741107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
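Each "openssl x509 -checkend 86400" run above asks whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration before the cluster starts. A Go equivalent using crypto/x509 (a sketch, not minikube's implementation):

	// checkend_sketch.go — report whether a PEM cert expires within 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// True when NotAfter falls inside the next d — same test as -checkend.
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}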
	I1017 19:41:36.434901  741107 kubeadm.go:400] StartCluster: {Name:embed-certs-599709 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-599709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:41:36.435021  741107 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:41:36.435098  741107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:41:36.470016  741107 cri.go:89] found id: "9229cd3e223ec817b5885265f0c88a1b78735a34ba5f6a4b4723d3fee1cf4d34"
	I1017 19:41:36.470041  741107 cri.go:89] found id: "eeadd287c3bf74a34717467fb1adfa03126b04b4a20a9dd1ecd6ef8e5fa4c43a"
	I1017 19:41:36.470046  741107 cri.go:89] found id: "3320bb4791740d09b759229a773dc3c8b5f46f29bca00968f79441653fafafce"
	I1017 19:41:36.470051  741107 cri.go:89] found id: "eccf39ad86610aefaf8eaf41939eb4ad09f3ebbd9c6afbe871000f0047c47987"
	I1017 19:41:36.470055  741107 cri.go:89] found id: ""
	I1017 19:41:36.470108  741107 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 19:41:36.485287  741107 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:41:36Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:41:36.485361  741107 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:41:36.497028  741107 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 19:41:36.497048  741107 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 19:41:36.497096  741107 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 19:41:36.507324  741107 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:41:36.508189  741107 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-599709" does not appear in /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:41:36.508912  741107 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-492109/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-599709" cluster setting kubeconfig missing "embed-certs-599709" context setting]
	I1017 19:41:36.509939  741107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:36.512195  741107 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 19:41:36.523034  741107 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1017 19:41:36.523073  741107 kubeadm.go:601] duration metric: took 26.018792ms to restartPrimaryControlPlane
	I1017 19:41:36.523087  741107 kubeadm.go:402] duration metric: took 88.19908ms to StartCluster
	I1017 19:41:36.523114  741107 settings.go:142] acquiring lock: {Name:mkb8ebc6edbbb6915dd74086f502bcc2721555a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:36.523186  741107 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:41:36.525440  741107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:36.525787  741107 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:41:36.525921  741107 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:41:36.526032  741107 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-599709"
	I1017 19:41:36.526051  741107 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-599709"
	W1017 19:41:36.526060  741107 addons.go:247] addon storage-provisioner should already be in state true
	I1017 19:41:36.526072  741107 config.go:182] Loaded profile config "embed-certs-599709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:41:36.526095  741107 host.go:66] Checking if "embed-certs-599709" exists ...
	I1017 19:41:36.526118  741107 addons.go:69] Setting dashboard=true in profile "embed-certs-599709"
	I1017 19:41:36.526137  741107 addons.go:238] Setting addon dashboard=true in "embed-certs-599709"
	W1017 19:41:36.526145  741107 addons.go:247] addon dashboard should already be in state true
	I1017 19:41:36.526170  741107 host.go:66] Checking if "embed-certs-599709" exists ...
	I1017 19:41:36.526165  741107 addons.go:69] Setting default-storageclass=true in profile "embed-certs-599709"
	I1017 19:41:36.526191  741107 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-599709"
	I1017 19:41:36.526526  741107 cli_runner.go:164] Run: docker container inspect embed-certs-599709 --format={{.State.Status}}
	I1017 19:41:36.526788  741107 cli_runner.go:164] Run: docker container inspect embed-certs-599709 --format={{.State.Status}}
	I1017 19:41:36.526837  741107 cli_runner.go:164] Run: docker container inspect embed-certs-599709 --format={{.State.Status}}
	I1017 19:41:36.527901  741107 out.go:179] * Verifying Kubernetes components...
	I1017 19:41:36.529055  741107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:41:36.556570  741107 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 19:41:36.556767  741107 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 19:41:36.557913  741107 addons.go:238] Setting addon default-storageclass=true in "embed-certs-599709"
	W1017 19:41:36.557937  741107 addons.go:247] addon default-storageclass should already be in state true
	I1017 19:41:36.557969  741107 host.go:66] Checking if "embed-certs-599709" exists ...
	I1017 19:41:36.558275  741107 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:41:36.558295  741107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 19:41:36.558357  741107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:41:36.558437  741107 cli_runner.go:164] Run: docker container inspect embed-certs-599709 --format={{.State.Status}}
	I1017 19:41:36.564379  741107 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
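The "docker container inspect -f" invocation a few lines above uses a Go template to pull the forwarded SSH port straight out of the container's NetworkSettings: index the Ports map by "22/tcp", take element 0, read HostPort. A self-contained sketch of the same template run against mock data (the port value is hypothetical):

	// inspect_template_sketch.go — evaluate docker's nested-index template
	// against a mock of the relevant slice of the inspect JSON.
	package main

	import (
		"os"
		"text/template"
	)

	type portBinding struct{ HostPort string }
	type settings struct {
		Ports map[string][]portBinding
	}
	type container struct {
		NetworkSettings settings
	}

	func main() {
		c := container{NetworkSettings: settings{
			Ports: map[string][]portBinding{"22/tcp": {{HostPort: "33061"}}}, // hypothetical port
		}}
		t := template.Must(template.New("p").Parse(
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
		_ = t.Execute(os.Stdout, c) // prints 33061
	}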
	I1017 19:41:34.041835  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:41:34.042293  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:41:34.042391  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:41:34.042462  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:41:34.075057  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:34.075084  696997 cri.go:89] found id: ""
	I1017 19:41:34.075095  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:41:34.075169  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:34.079837  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:41:34.079909  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:41:34.113946  696997 cri.go:89] found id: ""
	I1017 19:41:34.113975  696997 logs.go:282] 0 containers: []
	W1017 19:41:34.113986  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:41:34.113995  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:41:34.114061  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:41:34.146079  696997 cri.go:89] found id: ""
	I1017 19:41:34.146113  696997 logs.go:282] 0 containers: []
	W1017 19:41:34.146138  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:41:34.146147  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:41:34.146211  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:41:34.177544  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:34.177571  696997 cri.go:89] found id: ""
	I1017 19:41:34.177582  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:41:34.177642  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:34.182377  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:41:34.182448  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:41:34.214758  696997 cri.go:89] found id: ""
	I1017 19:41:34.214790  696997 logs.go:282] 0 containers: []
	W1017 19:41:34.214802  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:41:34.214810  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:41:34.214874  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:41:34.247547  696997 cri.go:89] found id: "ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:34.247571  696997 cri.go:89] found id: ""
	I1017 19:41:34.247582  696997 logs.go:282] 1 containers: [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770]
	I1017 19:41:34.247649  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:34.252595  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:41:34.252668  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:41:34.284209  696997 cri.go:89] found id: ""
	I1017 19:41:34.284236  696997 logs.go:282] 0 containers: []
	W1017 19:41:34.284247  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:41:34.284255  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:41:34.284320  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:41:34.315810  696997 cri.go:89] found id: ""
	I1017 19:41:34.315837  696997 logs.go:282] 0 containers: []
	W1017 19:41:34.315848  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:41:34.315861  696997 logs.go:123] Gathering logs for kube-controller-manager [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770] ...
	I1017 19:41:34.315882  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:34.353588  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:41:34.353618  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:41:34.417534  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:41:34.417574  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:41:34.450100  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:41:34.450134  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:41:34.577765  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:41:34.577805  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:41:34.600639  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:41:34.600752  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:41:34.669453  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:41:34.669482  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:41:34.669498  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:34.705720  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:41:34.705753  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:37.276761  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:41:37.277236  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:41:37.277304  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:41:37.277360  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:41:37.316156  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:37.316186  696997 cri.go:89] found id: ""
	I1017 19:41:37.316198  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:41:37.316269  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:37.321450  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:41:37.321523  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:41:37.357490  696997 cri.go:89] found id: ""
	I1017 19:41:37.357521  696997 logs.go:282] 0 containers: []
	W1017 19:41:37.357534  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:41:37.357542  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:41:37.357607  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:41:37.392600  696997 cri.go:89] found id: ""
	I1017 19:41:37.392715  696997 logs.go:282] 0 containers: []
	W1017 19:41:37.392727  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:41:37.392737  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:41:37.392810  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1017 19:41:34.090066  736846 pod_ready.go:104] pod "coredns-66bc5c9577-gnx5k" is not "Ready", error: <nil>
	W1017 19:41:36.090879  736846 pod_ready.go:104] pod "coredns-66bc5c9577-gnx5k" is not "Ready", error: <nil>
	W1017 19:41:38.091407  736846 pod_ready.go:104] pod "coredns-66bc5c9577-gnx5k" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 17 19:41:09 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:09.905330233Z" level=info msg="Created container e12315128e22de98b736c6a0aef19edd3e650649a5ea832a8c589ed2015cd1d4: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lh28q/kubernetes-dashboard" id=2a470f01-d498-49ce-a078-1afa99c106d3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:09 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:09.906136811Z" level=info msg="Starting container: e12315128e22de98b736c6a0aef19edd3e650649a5ea832a8c589ed2015cd1d4" id=5dffc633-55b8-4724-827b-f2dadda0ed67 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:41:09 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:09.908215313Z" level=info msg="Started container" PID=1719 containerID=e12315128e22de98b736c6a0aef19edd3e650649a5ea832a8c589ed2015cd1d4 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lh28q/kubernetes-dashboard id=5dffc633-55b8-4724-827b-f2dadda0ed67 name=/runtime.v1.RuntimeService/StartContainer sandboxID=93b5b8e78ff7347400650d49c608c4a2f23de482161e8a32d8f920b111f9e51a
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.459272835Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cdad3b11-21c3-431f-bf92-9b346d1159c8 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.460193198Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cdbc5241-b976-4556-bcd1-c2ca242f437e name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.461306327Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2292ff59-d379-4be0-ac66-8788cdd692fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.461772005Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.466549321Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.466788486Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e37da2b9f2ee2c8a48cb7b4f60d856043fbe671dab1c590ec763e010abca1ddb/merged/etc/passwd: no such file or directory"
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.466825142Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e37da2b9f2ee2c8a48cb7b4f60d856043fbe671dab1c590ec763e010abca1ddb/merged/etc/group: no such file or directory"
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.467070516Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.494914691Z" level=info msg="Created container ca3002e51fbb7b46eb280826e262993a5bea288bfe8287e1a0d672392d3182f5: kube-system/storage-provisioner/storage-provisioner" id=2292ff59-d379-4be0-ac66-8788cdd692fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.495627677Z" level=info msg="Starting container: ca3002e51fbb7b46eb280826e262993a5bea288bfe8287e1a0d672392d3182f5" id=74c0aaf9-6256-44cd-b6e1-f077c17239af name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:41:20 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:20.497579131Z" level=info msg="Started container" PID=1741 containerID=ca3002e51fbb7b46eb280826e262993a5bea288bfe8287e1a0d672392d3182f5 description=kube-system/storage-provisioner/storage-provisioner id=74c0aaf9-6256-44cd-b6e1-f077c17239af name=/runtime.v1.RuntimeService/StartContainer sandboxID=06574d67d4e7aab54c033b21728c1fc3206f0d091acd9d3d46e1e7de09d11549
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.325811022Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=62e653b5-2d86-4f77-b726-1f3c6a2a84b6 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.326786222Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=163f7e02-6bb0-4201-8ccf-74aaa783cdff name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.327878114Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96/dashboard-metrics-scraper" id=f620a999-0987-4392-b35c-ca5be8b61618 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.328138781Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.336201634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.336923186Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.373749736Z" level=info msg="Created container da0f545a9585922305e1f2b72786c36437454935dc6843626a40eaddd980c678: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96/dashboard-metrics-scraper" id=f620a999-0987-4392-b35c-ca5be8b61618 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.374473128Z" level=info msg="Starting container: da0f545a9585922305e1f2b72786c36437454935dc6843626a40eaddd980c678" id=5deb8571-9af1-4c84-ab09-86539bb13b80 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.37681392Z" level=info msg="Started container" PID=1778 containerID=da0f545a9585922305e1f2b72786c36437454935dc6843626a40eaddd980c678 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96/dashboard-metrics-scraper id=5deb8571-9af1-4c84-ab09-86539bb13b80 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7417532cdac64402fec20ca62dce6f73e115d6b5bb2773dbbc0ad33430799d35
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.47874618Z" level=info msg="Removing container: 8df50d229107da72795fef3da5005c971bfdcb1b111d146da91ee87a2df81325" id=78556f42-dc15-48e1-a170-2bd9250786d9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:41:25 old-k8s-version-907112 crio[562]: time="2025-10-17T19:41:25.490594745Z" level=info msg="Removed container 8df50d229107da72795fef3da5005c971bfdcb1b111d146da91ee87a2df81325: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96/dashboard-metrics-scraper" id=78556f42-dc15-48e1-a170-2bd9250786d9 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	da0f545a95859       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   2                   7417532cdac64       dashboard-metrics-scraper-5f989dc9cf-tts96       kubernetes-dashboard
	ca3002e51fbb7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   06574d67d4e7a       storage-provisioner                              kube-system
	e12315128e22d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   29 seconds ago      Running             kubernetes-dashboard        0                   93b5b8e78ff73       kubernetes-dashboard-8694d4445c-lh28q            kubernetes-dashboard
	092bce2982d1a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   30f4db7df0775       busybox                                          default
	850f097d87c9e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           49 seconds ago      Running             coredns                     0                   3b5f720bf5a6c       coredns-5dd5756b68-gnqx4                         kube-system
	1f22d5826138c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   06574d67d4e7a       storage-provisioner                              kube-system
	322480c43ff27       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   41ba6ea90c7d3       kindnet-2zq9g                                    kube-system
	52ccc49f9b576       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           49 seconds ago      Running             kube-proxy                  0                   b7fe4409fef94       kube-proxy-lzbjz                                 kube-system
	054c0ba11919a       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           53 seconds ago      Running             kube-apiserver              0                   1f785209d980f       kube-apiserver-old-k8s-version-907112            kube-system
	6f75954cb9769       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           53 seconds ago      Running             etcd                        0                   2fe6486f7e597       etcd-old-k8s-version-907112                      kube-system
	059b93c2a1d4e       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           53 seconds ago      Running             kube-controller-manager     0                   80f9b80fdd8c1       kube-controller-manager-old-k8s-version-907112   kube-system
	0aa671be2daa8       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           53 seconds ago      Running             kube-scheduler              0                   2f372e555ac9e       kube-scheduler-old-k8s-version-907112            kube-system
	
	
	==> coredns [850f097d87c9ee81fbb9873f23093120c53509fa1c290e387feea69404395a62] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45391 - 59905 "HINFO IN 3593438954215795362.5115465353772403331. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066332796s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-907112
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-907112
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=old-k8s-version-907112
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_39_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:39:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-907112
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:41:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:41:19 +0000   Fri, 17 Oct 2025 19:39:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:41:19 +0000   Fri, 17 Oct 2025 19:39:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:41:19 +0000   Fri, 17 Oct 2025 19:39:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:41:19 +0000   Fri, 17 Oct 2025 19:40:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-907112
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                b9d63c36-87df-4fe2-81c2-a81cd9f5ae31
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-5dd5756b68-gnqx4                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-old-k8s-version-907112                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-2zq9g                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-old-k8s-version-907112             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-907112    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-lzbjz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-old-k8s-version-907112             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-tts96        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-lh28q             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  NodeHasSufficientMemory  117s               kubelet          Node old-k8s-version-907112 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s               kubelet          Node old-k8s-version-907112 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s               kubelet          Node old-k8s-version-907112 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s               node-controller  Node old-k8s-version-907112 event: Registered Node old-k8s-version-907112 in Controller
	  Normal  NodeReady                91s                kubelet          Node old-k8s-version-907112 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node old-k8s-version-907112 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node old-k8s-version-907112 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node old-k8s-version-907112 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                node-controller  Node old-k8s-version-907112 event: Registered Node old-k8s-version-907112 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d1 49 91 03 c2 08 06
	[  +0.000804] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 a9 2b 44 da ae 08 06
	[Oct17 18:59] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022229] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023876] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	
	
	==> etcd [6f75954cb97693039a7a28b7e532c1cda8aaba2ac4c24c3d853c709e351d3c90] <==
	{"level":"info","ts":"2025-10-17T19:40:45.912289Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-17T19:40:45.912406Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T19:40:45.912436Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T19:40:45.912424Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T19:40:45.912481Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T19:40:45.912492Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T19:40:45.914723Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-17T19:40:45.91478Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-17T19:40:45.914799Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-17T19:40:45.914984Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-17T19:40:45.915019Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-17T19:40:47.601127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-17T19:40:47.601183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-17T19:40:47.60123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-17T19:40:47.601248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-17T19:40:47.601256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-17T19:40:47.601279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-17T19:40:47.601288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-17T19:40:47.603153Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T19:40:47.603157Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-907112 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-17T19:40:47.604396Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T19:40:47.604602Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-17T19:40:47.605662Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-17T19:40:47.606358Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-17T19:40:47.60638Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:41:39 up  3:23,  0 user,  load average: 3.23, 3.17, 2.04
	Linux old-k8s-version-907112 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [322480c43ff27fa7f365721afe1c5e3daaa5de2dc117b038c5cef04c9f210e44] <==
	I1017 19:40:49.936082       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:40:49.936567       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 19:40:49.936807       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:40:49.936827       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:40:49.936852       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:40:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:40:50.141244       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:40:50.141317       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:40:50.141334       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:40:50.141863       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:40:50.532590       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:40:50.532957       1 metrics.go:72] Registering metrics
	I1017 19:40:50.533055       1 controller.go:711] "Syncing nftables rules"
	I1017 19:41:00.141832       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:41:00.141929       1 main.go:301] handling current node
	I1017 19:41:10.143780       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:41:10.143837       1 main.go:301] handling current node
	I1017 19:41:20.141387       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:41:20.141427       1 main.go:301] handling current node
	I1017 19:41:30.142262       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:41:30.142310       1 main.go:301] handling current node
	
	
	==> kube-apiserver [054c0ba11919a27c613a43b0283529cadb5c43fac2b53a9bac2aaa468326a52d] <==
	I1017 19:40:48.784602       1 shared_informer.go:318] Caches are synced for configmaps
	I1017 19:40:48.784660       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1017 19:40:48.784705       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1017 19:40:48.785034       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1017 19:40:48.785255       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1017 19:40:48.786387       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1017 19:40:48.786434       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 19:40:48.786564       1 aggregator.go:166] initial CRD sync complete...
	I1017 19:40:48.786622       1 autoregister_controller.go:141] Starting autoregister controller
	I1017 19:40:48.786648       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 19:40:48.786674       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:40:48.789104       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1017 19:40:49.692615       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:40:49.836225       1 controller.go:624] quota admission added evaluator for: namespaces
	I1017 19:40:49.880111       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1017 19:40:49.901969       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:40:49.911308       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:40:49.920517       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1017 19:40:49.964929       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.106.230"}
	I1017 19:40:49.980595       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.7.112"}
	I1017 19:41:01.682563       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:41:01.682609       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:41:01.685586       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1017 19:41:01.879677       1 controller.go:624] quota admission added evaluator for: endpoints
	I1017 19:41:01.879676       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [059b93c2a1d4e2bc4bdba5fd8d096798638e1a2899fc8316153e0e2480d7fc01] <==
	I1017 19:41:01.703635       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-tts96"
	I1017 19:41:01.711858       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.20073ms"
	I1017 19:41:01.715701       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="24.924325ms"
	I1017 19:41:01.722302       1 shared_informer.go:318] Caches are synced for job
	I1017 19:41:01.723041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.217831ms"
	I1017 19:41:01.723195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.348µs"
	I1017 19:41:01.730603       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.677505ms"
	I1017 19:41:01.730716       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="69.898µs"
	I1017 19:41:01.738244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.2µs"
	I1017 19:41:01.868960       1 shared_informer.go:318] Caches are synced for endpoint
	I1017 19:41:01.873000       1 shared_informer.go:318] Caches are synced for resource quota
	I1017 19:41:01.877723       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1017 19:41:01.887456       1 shared_informer.go:318] Caches are synced for resource quota
	I1017 19:41:02.203308       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 19:41:02.219869       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 19:41:02.219905       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1017 19:41:04.423707       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="110.269µs"
	I1017 19:41:05.429241       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.753µs"
	I1017 19:41:06.431727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.681µs"
	I1017 19:41:10.448883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.051365ms"
	I1017 19:41:10.449012       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.503µs"
	I1017 19:41:20.611608       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.929304ms"
	I1017 19:41:20.611755       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.259µs"
	I1017 19:41:25.489754       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="111.068µs"
	I1017 19:41:32.025068       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.134µs"
	
	
	==> kube-proxy [52ccc49f9b576e337a415e132dddb263f30a654ae3f3c7a05451e7f01db3687f] <==
	I1017 19:40:49.769172       1 server_others.go:69] "Using iptables proxy"
	I1017 19:40:49.780414       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1017 19:40:49.808931       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:40:49.812083       1 server_others.go:152] "Using iptables Proxier"
	I1017 19:40:49.812121       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1017 19:40:49.812128       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1017 19:40:49.812163       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1017 19:40:49.812384       1 server.go:846] "Version info" version="v1.28.0"
	I1017 19:40:49.812393       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:40:49.813154       1 config.go:315] "Starting node config controller"
	I1017 19:40:49.813237       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1017 19:40:49.814766       1 config.go:188] "Starting service config controller"
	I1017 19:40:49.814932       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1017 19:40:49.814781       1 config.go:97] "Starting endpoint slice config controller"
	I1017 19:40:49.814979       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1017 19:40:49.915032       1 shared_informer.go:318] Caches are synced for service config
	I1017 19:40:49.915102       1 shared_informer.go:318] Caches are synced for node config
	I1017 19:40:49.915230       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0aa671be2daa82154fa84103fd15b8447d2b25c3049ce697edb71872df1653db] <==
	I1017 19:40:46.345197       1 serving.go:348] Generated self-signed cert in-memory
	W1017 19:40:48.717097       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 19:40:48.717144       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 19:40:48.717183       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 19:40:48.717192       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 19:40:48.753071       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1017 19:40:48.753103       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:40:48.754868       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:40:48.754926       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1017 19:40:48.756497       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1017 19:40:48.756659       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1017 19:40:48.856052       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 17 19:41:01 old-k8s-version-907112 kubelet[723]: I1017 19:41:01.711434     723 topology_manager.go:215] "Topology Admit Handler" podUID="a7ce6c10-a999-4cd3-99b7-8431fa62b484" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-tts96"
	Oct 17 19:41:01 old-k8s-version-907112 kubelet[723]: I1017 19:41:01.794121     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a7ce6c10-a999-4cd3-99b7-8431fa62b484-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-tts96\" (UID: \"a7ce6c10-a999-4cd3-99b7-8431fa62b484\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96"
	Oct 17 19:41:01 old-k8s-version-907112 kubelet[723]: I1017 19:41:01.794283     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8prh\" (UniqueName: \"kubernetes.io/projected/d975038c-cb8d-4021-9882-0dd6334eb118-kube-api-access-z8prh\") pod \"kubernetes-dashboard-8694d4445c-lh28q\" (UID: \"d975038c-cb8d-4021-9882-0dd6334eb118\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lh28q"
	Oct 17 19:41:01 old-k8s-version-907112 kubelet[723]: I1017 19:41:01.794351     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d975038c-cb8d-4021-9882-0dd6334eb118-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-lh28q\" (UID: \"d975038c-cb8d-4021-9882-0dd6334eb118\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lh28q"
	Oct 17 19:41:01 old-k8s-version-907112 kubelet[723]: I1017 19:41:01.794438     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8fhz\" (UniqueName: \"kubernetes.io/projected/a7ce6c10-a999-4cd3-99b7-8431fa62b484-kube-api-access-c8fhz\") pod \"dashboard-metrics-scraper-5f989dc9cf-tts96\" (UID: \"a7ce6c10-a999-4cd3-99b7-8431fa62b484\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96"
	Oct 17 19:41:04 old-k8s-version-907112 kubelet[723]: I1017 19:41:04.411490     723 scope.go:117] "RemoveContainer" containerID="6631f01e880325aca48a0dffba57c8b2f1fdba2215eaca6c635aa77ef4e0cb73"
	Oct 17 19:41:05 old-k8s-version-907112 kubelet[723]: I1017 19:41:05.416076     723 scope.go:117] "RemoveContainer" containerID="6631f01e880325aca48a0dffba57c8b2f1fdba2215eaca6c635aa77ef4e0cb73"
	Oct 17 19:41:05 old-k8s-version-907112 kubelet[723]: I1017 19:41:05.416381     723 scope.go:117] "RemoveContainer" containerID="8df50d229107da72795fef3da5005c971bfdcb1b111d146da91ee87a2df81325"
	Oct 17 19:41:05 old-k8s-version-907112 kubelet[723]: E1017 19:41:05.416829     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tts96_kubernetes-dashboard(a7ce6c10-a999-4cd3-99b7-8431fa62b484)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96" podUID="a7ce6c10-a999-4cd3-99b7-8431fa62b484"
	Oct 17 19:41:06 old-k8s-version-907112 kubelet[723]: I1017 19:41:06.420300     723 scope.go:117] "RemoveContainer" containerID="8df50d229107da72795fef3da5005c971bfdcb1b111d146da91ee87a2df81325"
	Oct 17 19:41:06 old-k8s-version-907112 kubelet[723]: E1017 19:41:06.420556     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tts96_kubernetes-dashboard(a7ce6c10-a999-4cd3-99b7-8431fa62b484)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96" podUID="a7ce6c10-a999-4cd3-99b7-8431fa62b484"
	Oct 17 19:41:10 old-k8s-version-907112 kubelet[723]: I1017 19:41:10.442101     723 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lh28q" podStartSLOduration=1.6118768559999999 podCreationTimestamp="2025-10-17 19:41:01 +0000 UTC" firstStartedPulling="2025-10-17 19:41:02.038292185 +0000 UTC m=+16.807636659" lastFinishedPulling="2025-10-17 19:41:09.868452156 +0000 UTC m=+24.637796635" observedRunningTime="2025-10-17 19:41:10.441633286 +0000 UTC m=+25.210977775" watchObservedRunningTime="2025-10-17 19:41:10.442036832 +0000 UTC m=+25.211381322"
	Oct 17 19:41:12 old-k8s-version-907112 kubelet[723]: I1017 19:41:12.013771     723 scope.go:117] "RemoveContainer" containerID="8df50d229107da72795fef3da5005c971bfdcb1b111d146da91ee87a2df81325"
	Oct 17 19:41:12 old-k8s-version-907112 kubelet[723]: E1017 19:41:12.014073     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tts96_kubernetes-dashboard(a7ce6c10-a999-4cd3-99b7-8431fa62b484)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96" podUID="a7ce6c10-a999-4cd3-99b7-8431fa62b484"
	Oct 17 19:41:20 old-k8s-version-907112 kubelet[723]: I1017 19:41:20.458837     723 scope.go:117] "RemoveContainer" containerID="1f22d5826138c6ffba6839da3f8f7c8bad03751a7d19957f7e94844c9d6c7fbf"
	Oct 17 19:41:25 old-k8s-version-907112 kubelet[723]: I1017 19:41:25.325097     723 scope.go:117] "RemoveContainer" containerID="8df50d229107da72795fef3da5005c971bfdcb1b111d146da91ee87a2df81325"
	Oct 17 19:41:25 old-k8s-version-907112 kubelet[723]: I1017 19:41:25.476570     723 scope.go:117] "RemoveContainer" containerID="8df50d229107da72795fef3da5005c971bfdcb1b111d146da91ee87a2df81325"
	Oct 17 19:41:25 old-k8s-version-907112 kubelet[723]: I1017 19:41:25.476828     723 scope.go:117] "RemoveContainer" containerID="da0f545a9585922305e1f2b72786c36437454935dc6843626a40eaddd980c678"
	Oct 17 19:41:25 old-k8s-version-907112 kubelet[723]: E1017 19:41:25.477244     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tts96_kubernetes-dashboard(a7ce6c10-a999-4cd3-99b7-8431fa62b484)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96" podUID="a7ce6c10-a999-4cd3-99b7-8431fa62b484"
	Oct 17 19:41:32 old-k8s-version-907112 kubelet[723]: I1017 19:41:32.014515     723 scope.go:117] "RemoveContainer" containerID="da0f545a9585922305e1f2b72786c36437454935dc6843626a40eaddd980c678"
	Oct 17 19:41:32 old-k8s-version-907112 kubelet[723]: E1017 19:41:32.014865     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tts96_kubernetes-dashboard(a7ce6c10-a999-4cd3-99b7-8431fa62b484)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tts96" podUID="a7ce6c10-a999-4cd3-99b7-8431fa62b484"
	Oct 17 19:41:34 old-k8s-version-907112 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 19:41:34 old-k8s-version-907112 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 19:41:34 old-k8s-version-907112 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 19:41:34 old-k8s-version-907112 systemd[1]: kubelet.service: Consumed 1.552s CPU time.
	
	
	==> kubernetes-dashboard [e12315128e22de98b736c6a0aef19edd3e650649a5ea832a8c589ed2015cd1d4] <==
	2025/10/17 19:41:09 Starting overwatch
	2025/10/17 19:41:09 Using namespace: kubernetes-dashboard
	2025/10/17 19:41:09 Using in-cluster config to connect to apiserver
	2025/10/17 19:41:09 Using secret token for csrf signing
	2025/10/17 19:41:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 19:41:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 19:41:09 Successful initial request to the apiserver, version: v1.28.0
	2025/10/17 19:41:09 Generating JWE encryption key
	2025/10/17 19:41:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 19:41:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 19:41:10 Initializing JWE encryption key from synchronized object
	2025/10/17 19:41:10 Creating in-cluster Sidecar client
	2025/10/17 19:41:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 19:41:10 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [1f22d5826138c6ffba6839da3f8f7c8bad03751a7d19957f7e94844c9d6c7fbf] <==
	I1017 19:40:49.734135       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 19:41:19.737141       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ca3002e51fbb7b46eb280826e262993a5bea288bfe8287e1a0d672392d3182f5] <==
	I1017 19:41:20.510143       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 19:41:20.521396       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 19:41:20.521445       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1017 19:41:37.922833       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 19:41:37.923261       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-907112_80f3ec4d-bbc2-4169-8fef-3c5cd7f637f1!
	I1017 19:41:37.923281       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3f9a20f1-f928-44b0-afc3-41b87fa18958", APIVersion:"v1", ResourceVersion:"620", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-907112_80f3ec4d-bbc2-4169-8fef-3c5cd7f637f1 became leader
	I1017 19:41:38.023942       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-907112_80f3ec4d-bbc2-4169-8fef-3c5cd7f637f1!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-907112 -n old-k8s-version-907112
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-907112 -n old-k8s-version-907112: exit status 2 (343.840626ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-907112 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.99s)
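The kubelet excerpt above shows the dashboard-metrics-scraper restart delay growing from "back-off 10s" to "back-off 20s". As a hedged illustration (not minikube or kubelet code), assuming the commonly documented CrashLoopBackOff defaults of a 10s initial delay that doubles per crash and caps at 5m, the schedule would continue as sketched below:

	# illustration only; the 10s and 20s steps are observed in the log above,
	# the remaining values are the assumed continuation under doubling-with-cap
	delay=10
	for _ in 1 2 3 4 5 6; do
	  printf '%ss\n' "$delay"        # prints 10s 20s 40s 80s 160s 300s
	  delay=$((delay * 2))
	  [ "$delay" -gt 300 ] && delay=300
	done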

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-171807 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-171807 --alsologtostderr -v=1: exit status 80 (2.164931749s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-171807 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:42:05.263364  749971 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:42:05.263660  749971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:42:05.263670  749971 out.go:374] Setting ErrFile to fd 2...
	I1017 19:42:05.263674  749971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:42:05.263965  749971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:42:05.264257  749971 out.go:368] Setting JSON to false
	I1017 19:42:05.264308  749971 mustload.go:65] Loading cluster: no-preload-171807
	I1017 19:42:05.264716  749971 config.go:182] Loaded profile config "no-preload-171807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:42:05.265171  749971 cli_runner.go:164] Run: docker container inspect no-preload-171807 --format={{.State.Status}}
	I1017 19:42:05.283531  749971 host.go:66] Checking if "no-preload-171807" exists ...
	I1017 19:42:05.283863  749971 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:42:05.353904  749971 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 19:42:05.341420937 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:42:05.354655  749971 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-171807 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 19:42:05.357185  749971 out.go:179] * Pausing node no-preload-171807 ... 
	I1017 19:42:05.358506  749971 host.go:66] Checking if "no-preload-171807" exists ...
	I1017 19:42:05.358901  749971 ssh_runner.go:195] Run: systemctl --version
	I1017 19:42:05.358961  749971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-171807
	I1017 19:42:05.380864  749971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/no-preload-171807/id_rsa Username:docker}
	I1017 19:42:05.485204  749971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:42:05.501322  749971 pause.go:52] kubelet running: true
	I1017 19:42:05.501398  749971 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:42:05.701284  749971 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:42:05.701438  749971 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:42:05.792396  749971 cri.go:89] found id: "c38fc3e3e5753ebdf5ea7669f5ca235a915e6ca85e02b4d3a1dd0a1412bfb0b3"
	I1017 19:42:05.792429  749971 cri.go:89] found id: "835887455a526598d2d867876cd5a46611eab57d28140e1ba67e9ee8f72601e5"
	I1017 19:42:05.792435  749971 cri.go:89] found id: "a2184126b0f26d397ffbbb79f922291dad5e971092ca6caa2f3d7d4cb54166c9"
	I1017 19:42:05.792440  749971 cri.go:89] found id: "d022a76c654d2e18ebf220443cc9aab41bb02d48d7f4800b39daf43d8ce2eea1"
	I1017 19:42:05.792444  749971 cri.go:89] found id: "8604f98158605205b8f1f8315ebc37171cf7eca33ac7f8dff67117b30bbd6b4d"
	I1017 19:42:05.792450  749971 cri.go:89] found id: "d86dd76d8b3bd2505d622c4f7afdac7241ad790540b4197dfa7a873877fdd920"
	I1017 19:42:05.792546  749971 cri.go:89] found id: "2c72f7d2bb251ff207976219245143bbd296d8b6a6495c2e5556d0e9da8f1099"
	I1017 19:42:05.792571  749971 cri.go:89] found id: "2e00090e4a67b40ac53e71a16e43401493b444c9846af2e602339d93281be030"
	I1017 19:42:05.792577  749971 cri.go:89] found id: "3c4af638c6379e21034b2badcf605ec633afc47f689a92da70fdcdf1faa4d286"
	I1017 19:42:05.792586  749971 cri.go:89] found id: "b00a978ba6c2ba2beea9f7bc631934a305976c12436f5a13772cbbabda6c49c3"
	I1017 19:42:05.792590  749971 cri.go:89] found id: "e35ca6f1c73b7d72497bda5266b591c7c57a2476a6ec5fa6c61165d1cdde7cad"
	I1017 19:42:05.792594  749971 cri.go:89] found id: ""
	I1017 19:42:05.792650  749971 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:42:05.809570  749971 retry.go:31] will retry after 155.070264ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:42:05Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:42:05.964845  749971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:42:05.981483  749971 pause.go:52] kubelet running: false
	I1017 19:42:05.981542  749971 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:42:06.147798  749971 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:42:06.147925  749971 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:42:06.220054  749971 cri.go:89] found id: "c38fc3e3e5753ebdf5ea7669f5ca235a915e6ca85e02b4d3a1dd0a1412bfb0b3"
	I1017 19:42:06.220105  749971 cri.go:89] found id: "835887455a526598d2d867876cd5a46611eab57d28140e1ba67e9ee8f72601e5"
	I1017 19:42:06.220111  749971 cri.go:89] found id: "a2184126b0f26d397ffbbb79f922291dad5e971092ca6caa2f3d7d4cb54166c9"
	I1017 19:42:06.220116  749971 cri.go:89] found id: "d022a76c654d2e18ebf220443cc9aab41bb02d48d7f4800b39daf43d8ce2eea1"
	I1017 19:42:06.220120  749971 cri.go:89] found id: "8604f98158605205b8f1f8315ebc37171cf7eca33ac7f8dff67117b30bbd6b4d"
	I1017 19:42:06.220125  749971 cri.go:89] found id: "d86dd76d8b3bd2505d622c4f7afdac7241ad790540b4197dfa7a873877fdd920"
	I1017 19:42:06.220129  749971 cri.go:89] found id: "2c72f7d2bb251ff207976219245143bbd296d8b6a6495c2e5556d0e9da8f1099"
	I1017 19:42:06.220133  749971 cri.go:89] found id: "2e00090e4a67b40ac53e71a16e43401493b444c9846af2e602339d93281be030"
	I1017 19:42:06.220135  749971 cri.go:89] found id: "3c4af638c6379e21034b2badcf605ec633afc47f689a92da70fdcdf1faa4d286"
	I1017 19:42:06.220159  749971 cri.go:89] found id: "b00a978ba6c2ba2beea9f7bc631934a305976c12436f5a13772cbbabda6c49c3"
	I1017 19:42:06.220168  749971 cri.go:89] found id: "e35ca6f1c73b7d72497bda5266b591c7c57a2476a6ec5fa6c61165d1cdde7cad"
	I1017 19:42:06.220173  749971 cri.go:89] found id: ""
	I1017 19:42:06.220223  749971 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:42:06.232850  749971 retry.go:31] will retry after 227.125669ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:42:06Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:42:06.461187  749971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:42:06.478469  749971 pause.go:52] kubelet running: false
	I1017 19:42:06.478537  749971 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:42:06.645031  749971 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:42:06.645123  749971 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:42:06.732942  749971 cri.go:89] found id: "c38fc3e3e5753ebdf5ea7669f5ca235a915e6ca85e02b4d3a1dd0a1412bfb0b3"
	I1017 19:42:06.732969  749971 cri.go:89] found id: "835887455a526598d2d867876cd5a46611eab57d28140e1ba67e9ee8f72601e5"
	I1017 19:42:06.732975  749971 cri.go:89] found id: "a2184126b0f26d397ffbbb79f922291dad5e971092ca6caa2f3d7d4cb54166c9"
	I1017 19:42:06.732980  749971 cri.go:89] found id: "d022a76c654d2e18ebf220443cc9aab41bb02d48d7f4800b39daf43d8ce2eea1"
	I1017 19:42:06.732985  749971 cri.go:89] found id: "8604f98158605205b8f1f8315ebc37171cf7eca33ac7f8dff67117b30bbd6b4d"
	I1017 19:42:06.732991  749971 cri.go:89] found id: "d86dd76d8b3bd2505d622c4f7afdac7241ad790540b4197dfa7a873877fdd920"
	I1017 19:42:06.732996  749971 cri.go:89] found id: "2c72f7d2bb251ff207976219245143bbd296d8b6a6495c2e5556d0e9da8f1099"
	I1017 19:42:06.733000  749971 cri.go:89] found id: "2e00090e4a67b40ac53e71a16e43401493b444c9846af2e602339d93281be030"
	I1017 19:42:06.733004  749971 cri.go:89] found id: "3c4af638c6379e21034b2badcf605ec633afc47f689a92da70fdcdf1faa4d286"
	I1017 19:42:06.733014  749971 cri.go:89] found id: "b00a978ba6c2ba2beea9f7bc631934a305976c12436f5a13772cbbabda6c49c3"
	I1017 19:42:06.733019  749971 cri.go:89] found id: "e35ca6f1c73b7d72497bda5266b591c7c57a2476a6ec5fa6c61165d1cdde7cad"
	I1017 19:42:06.733023  749971 cri.go:89] found id: ""
	I1017 19:42:06.733074  749971 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:42:06.753081  749971 retry.go:31] will retry after 344.984495ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:42:06Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:42:07.098823  749971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:42:07.113381  749971 pause.go:52] kubelet running: false
	I1017 19:42:07.113448  749971 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:42:07.273818  749971 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:42:07.273921  749971 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:42:07.351820  749971 cri.go:89] found id: "c38fc3e3e5753ebdf5ea7669f5ca235a915e6ca85e02b4d3a1dd0a1412bfb0b3"
	I1017 19:42:07.351850  749971 cri.go:89] found id: "835887455a526598d2d867876cd5a46611eab57d28140e1ba67e9ee8f72601e5"
	I1017 19:42:07.351855  749971 cri.go:89] found id: "a2184126b0f26d397ffbbb79f922291dad5e971092ca6caa2f3d7d4cb54166c9"
	I1017 19:42:07.351860  749971 cri.go:89] found id: "d022a76c654d2e18ebf220443cc9aab41bb02d48d7f4800b39daf43d8ce2eea1"
	I1017 19:42:07.351865  749971 cri.go:89] found id: "8604f98158605205b8f1f8315ebc37171cf7eca33ac7f8dff67117b30bbd6b4d"
	I1017 19:42:07.351871  749971 cri.go:89] found id: "d86dd76d8b3bd2505d622c4f7afdac7241ad790540b4197dfa7a873877fdd920"
	I1017 19:42:07.351875  749971 cri.go:89] found id: "2c72f7d2bb251ff207976219245143bbd296d8b6a6495c2e5556d0e9da8f1099"
	I1017 19:42:07.351881  749971 cri.go:89] found id: "2e00090e4a67b40ac53e71a16e43401493b444c9846af2e602339d93281be030"
	I1017 19:42:07.351885  749971 cri.go:89] found id: "3c4af638c6379e21034b2badcf605ec633afc47f689a92da70fdcdf1faa4d286"
	I1017 19:42:07.351893  749971 cri.go:89] found id: "b00a978ba6c2ba2beea9f7bc631934a305976c12436f5a13772cbbabda6c49c3"
	I1017 19:42:07.351897  749971 cri.go:89] found id: "e35ca6f1c73b7d72497bda5266b591c7c57a2476a6ec5fa6c61165d1cdde7cad"
	I1017 19:42:07.351902  749971 cri.go:89] found id: ""
	I1017 19:42:07.351949  749971 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:42:07.368946  749971 out.go:203] 
	W1017 19:42:07.370295  749971 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:42:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:42:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:42:07.370319  749971 out.go:285] * 
	* 
	W1017 19:42:07.375290  749971 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:42:07.377584  749971 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-171807 --alsologtostderr -v=1 failed: exit status 80
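The stderr above shows the pause failing while enumerating containers rather than while pausing them: every attempt at "sudo runc list -f json" exits with status 1 because /run/runc does not exist on the node, and the retry.go attempts simply repeat the same probe. A hedged diagnostic sketch against the same profile (assuming SSH to the node still works and crictl is installed there, both of which the log suggests):

	# diagnostic sketch only, not part of the recorded test run
	# does the runc state directory the pause path reads actually exist?
	out/minikube-linux-amd64 -p no-preload-171807 ssh -- 'sudo ls -ld /run/runc'
	# what does the CRI itself report as running?
	out/minikube-linux-amd64 -p no-preload-171807 ssh -- 'sudo crictl ps --state running -q'

If the directory is missing while crictl still lists running containers, the failure sits in how the pause path lists containers on a crio node rather than in the cluster workloads themselves.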
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-171807
helpers_test.go:243: (dbg) docker inspect no-preload-171807:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5",
	        "Created": "2025-10-17T19:39:49.424559642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 737031,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:41:08.552977888Z",
	            "FinishedAt": "2025-10-17T19:41:07.200630261Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5/hostname",
	        "HostsPath": "/var/lib/docker/containers/6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5/hosts",
	        "LogPath": "/var/lib/docker/containers/6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5/6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5-json.log",
	        "Name": "/no-preload-171807",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-171807:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-171807",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5",
	                "LowerDir": "/var/lib/docker/overlay2/2465273c560fa18f3af90b746f46f6002d9f83f3da22434fa2cf4768a02a24de-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2465273c560fa18f3af90b746f46f6002d9f83f3da22434fa2cf4768a02a24de/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2465273c560fa18f3af90b746f46f6002d9f83f3da22434fa2cf4768a02a24de/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2465273c560fa18f3af90b746f46f6002d9f83f3da22434fa2cf4768a02a24de/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-171807",
	                "Source": "/var/lib/docker/volumes/no-preload-171807/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-171807",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-171807",
	                "name.minikube.sigs.k8s.io": "no-preload-171807",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "64f62ea835f43cf3044abc0f0847d4ed2b8981195777d845bb804a8fc1a98665",
	            "SandboxKey": "/var/run/docker/netns/64f62ea835f4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-171807": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:35:2e:52:01:4b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4d20f1cdd8a9ad4b75566b03de0ba176c437b8596d360733d4786d1a9071e68d",
	                    "EndpointID": "cf4b48de40fe4efc66e471ceeb9ffe9d78c77e169f71f2c651ec88b58a8bc4e1",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-171807",
	                        "6738402fa93e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-171807 -n no-preload-171807
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-171807 -n no-preload-171807: exit status 2 (357.691294ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-171807 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-171807 logs -n 25: (1.226799181s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p pause-022753                                                                                                                                                                                                                               │ pause-022753                 │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ start   │ -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:40 UTC │
	│ start   │ -p cert-expiration-141205 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-141205       │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ delete  │ -p cert-expiration-141205                                                                                                                                                                                                                     │ cert-expiration-141205       │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-907112 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ start   │ -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ stop    │ -p old-k8s-version-907112 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-907112 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ start   │ -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable metrics-server -p no-preload-171807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ stop    │ -p no-preload-171807 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p no-preload-171807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-599709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ stop    │ -p embed-certs-599709 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p embed-certs-599709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ image   │ old-k8s-version-907112 image list --format=json                                                                                                                                                                                               │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-907112 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ delete  │ -p old-k8s-version-907112                                                                                                                                                                                                                     │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-907112                                                                                                                                                                                                                     │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ delete  │ -p disable-driver-mounts-220565                                                                                                                                                                                                               │ disable-driver-mounts-220565 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p default-k8s-diff-port-112878 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ image   │ no-preload-171807 image list --format=json                                                                                                                                                                                                    │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ pause   │ -p no-preload-171807 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:41:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:41:43.616967  745903 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:41:43.617358  745903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:41:43.617370  745903 out.go:374] Setting ErrFile to fd 2...
	I1017 19:41:43.617376  745903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:41:43.617742  745903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:41:43.618276  745903 out.go:368] Setting JSON to false
	I1017 19:41:43.620011  745903 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12243,"bootTime":1760717861,"procs":354,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:41:43.620210  745903 start.go:141] virtualization: kvm guest
	I1017 19:41:43.622362  745903 out.go:179] * [default-k8s-diff-port-112878] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:41:43.624054  745903 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:41:43.624039  745903 notify.go:220] Checking for updates...
	I1017 19:41:43.626637  745903 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:41:43.628483  745903 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:41:43.629882  745903 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:41:43.631360  745903 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:41:43.632653  745903 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:41:43.634991  745903 config.go:182] Loaded profile config "embed-certs-599709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:41:43.635125  745903 config.go:182] Loaded profile config "kubernetes-upgrade-137244": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:41:43.635241  745903 config.go:182] Loaded profile config "no-preload-171807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:41:43.635365  745903 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:41:43.666650  745903 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:41:43.666770  745903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:41:43.742138  745903 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 19:41:43.728916302 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:41:43.742286  745903 docker.go:318] overlay module found
	I1017 19:41:43.745204  745903 out.go:179] * Using the docker driver based on user configuration
	I1017 19:41:38.938295  741107 addons.go:514] duration metric: took 2.412388436s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1017 19:41:39.424761  741107 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1017 19:41:39.430465  741107 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 19:41:39.430498  741107 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 19:41:39.924838  741107 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1017 19:41:39.929414  741107 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1017 19:41:39.930574  741107 api_server.go:141] control plane version: v1.34.1
	I1017 19:41:39.930605  741107 api_server.go:131] duration metric: took 1.006244433s to wait for apiserver health ...
	I1017 19:41:39.930616  741107 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:41:39.934755  741107 system_pods.go:59] 8 kube-system pods found
	I1017 19:41:39.934805  741107 system_pods.go:61] "coredns-66bc5c9577-v8hls" [a5c14de3-5736-4bb4-b7d4-7eee1aade5e2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:41:39.934817  741107 system_pods.go:61] "etcd-embed-certs-599709" [bb79f8c8-ab08-444c-9a40-a5350363cc1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:41:39.934825  741107 system_pods.go:61] "kindnet-sj7sj" [7e5aa5b6-57e8-4ad9-9b23-53eeffd10715] Running
	I1017 19:41:39.934834  741107 system_pods.go:61] "kube-apiserver-embed-certs-599709" [a32d29b8-0363-444d-9b3c-7783f55fa404] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:41:39.934845  741107 system_pods.go:61] "kube-controller-manager-embed-certs-599709" [e88ac6d0-9e7a-4fcd-ac5a-b39168c76bcf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:41:39.934861  741107 system_pods.go:61] "kube-proxy-l2pwz" [1ea9dbf3-19b4-4b54-95c1-df8fa679f2bb] Running
	I1017 19:41:39.934870  741107 system_pods.go:61] "kube-scheduler-embed-certs-599709" [6d1db335-0b58-4714-b27a-502897391843] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:41:39.934875  741107 system_pods.go:61] "storage-provisioner" [2d8a3a4d-3738-4d33-98fd-b99622f860ec] Running
	I1017 19:41:39.934886  741107 system_pods.go:74] duration metric: took 4.262687ms to wait for pod list to return data ...
	I1017 19:41:39.934898  741107 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:41:39.938025  741107 default_sa.go:45] found service account: "default"
	I1017 19:41:39.938051  741107 default_sa.go:55] duration metric: took 3.145055ms for default service account to be created ...
	I1017 19:41:39.938061  741107 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:41:39.941165  741107 system_pods.go:86] 8 kube-system pods found
	I1017 19:41:39.941193  741107 system_pods.go:89] "coredns-66bc5c9577-v8hls" [a5c14de3-5736-4bb4-b7d4-7eee1aade5e2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:41:39.941202  741107 system_pods.go:89] "etcd-embed-certs-599709" [bb79f8c8-ab08-444c-9a40-a5350363cc1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:41:39.941209  741107 system_pods.go:89] "kindnet-sj7sj" [7e5aa5b6-57e8-4ad9-9b23-53eeffd10715] Running
	I1017 19:41:39.941223  741107 system_pods.go:89] "kube-apiserver-embed-certs-599709" [a32d29b8-0363-444d-9b3c-7783f55fa404] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:41:39.941235  741107 system_pods.go:89] "kube-controller-manager-embed-certs-599709" [e88ac6d0-9e7a-4fcd-ac5a-b39168c76bcf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:41:39.941243  741107 system_pods.go:89] "kube-proxy-l2pwz" [1ea9dbf3-19b4-4b54-95c1-df8fa679f2bb] Running
	I1017 19:41:39.941254  741107 system_pods.go:89] "kube-scheduler-embed-certs-599709" [6d1db335-0b58-4714-b27a-502897391843] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:41:39.941261  741107 system_pods.go:89] "storage-provisioner" [2d8a3a4d-3738-4d33-98fd-b99622f860ec] Running
	I1017 19:41:39.941270  741107 system_pods.go:126] duration metric: took 3.202094ms to wait for k8s-apps to be running ...
	I1017 19:41:39.941280  741107 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:41:39.941337  741107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:41:39.956251  741107 system_svc.go:56] duration metric: took 14.959792ms WaitForService to wait for kubelet
	I1017 19:41:39.956280  741107 kubeadm.go:586] duration metric: took 3.430460959s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:41:39.956299  741107 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:41:39.959465  741107 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 19:41:39.959498  741107 node_conditions.go:123] node cpu capacity is 8
	I1017 19:41:39.959516  741107 node_conditions.go:105] duration metric: took 3.212196ms to run NodePressure ...
	I1017 19:41:39.959532  741107 start.go:241] waiting for startup goroutines ...
	I1017 19:41:39.959545  741107 start.go:246] waiting for cluster config update ...
	I1017 19:41:39.959562  741107 start.go:255] writing updated cluster config ...
	I1017 19:41:39.959870  741107 ssh_runner.go:195] Run: rm -f paused
	I1017 19:41:39.964034  741107 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:41:39.968215  741107 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v8hls" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 19:41:41.974818  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	I1017 19:41:43.747723  745903 start.go:305] selected driver: docker
	I1017 19:41:43.747745  745903 start.go:925] validating driver "docker" against <nil>
	I1017 19:41:43.747765  745903 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:41:43.748599  745903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:41:43.834246  745903 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 19:41:43.820003123 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:41:43.834457  745903 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 19:41:43.834794  745903 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:41:43.836542  745903 out.go:179] * Using Docker driver with root privileges
	I1017 19:41:43.837654  745903 cni.go:84] Creating CNI manager for ""
	I1017 19:41:43.837752  745903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:41:43.837766  745903 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 19:41:43.837870  745903 start.go:349] cluster config:
	{Name:default-k8s-diff-port-112878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-112878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:41:43.839819  745903 out.go:179] * Starting "default-k8s-diff-port-112878" primary control-plane node in "default-k8s-diff-port-112878" cluster
	I1017 19:41:43.841766  745903 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:41:43.843336  745903 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:41:43.844530  745903 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:41:43.844589  745903 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:41:43.844606  745903 cache.go:58] Caching tarball of preloaded images
	I1017 19:41:43.844597  745903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:41:43.844760  745903 preload.go:233] Found /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:41:43.844775  745903 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:41:43.844935  745903 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/config.json ...
	I1017 19:41:43.844961  745903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/config.json: {Name:mk460bf2d77dd3a84f681f1f712690b68fc42abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:43.875006  745903 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:41:43.875056  745903 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:41:43.875078  745903 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:41:43.875117  745903 start.go:360] acquireMachinesLock for default-k8s-diff-port-112878: {Name:mke65bf3d91761a71e610a747337c18b9c7b5f17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:41:43.875239  745903 start.go:364] duration metric: took 96.42µs to acquireMachinesLock for "default-k8s-diff-port-112878"
	I1017 19:41:43.875268  745903 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-112878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-112878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:41:43.875375  745903 start.go:125] createHost starting for "" (driver="docker")
	I1017 19:41:43.762832  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:41:43.763278  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:41:43.763335  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:41:43.763389  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:41:43.811132  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:43.811153  696997 cri.go:89] found id: ""
	I1017 19:41:43.811164  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:41:43.811234  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:43.817567  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:41:43.817676  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:41:43.856013  696997 cri.go:89] found id: ""
	I1017 19:41:43.856042  696997 logs.go:282] 0 containers: []
	W1017 19:41:43.856054  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:41:43.856062  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:41:43.856124  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:41:43.894920  696997 cri.go:89] found id: ""
	I1017 19:41:43.894946  696997 logs.go:282] 0 containers: []
	W1017 19:41:43.894957  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:41:43.894965  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:41:43.895031  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:41:43.932368  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:43.932392  696997 cri.go:89] found id: ""
	I1017 19:41:43.932403  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:41:43.932461  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:43.938431  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:41:43.938507  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:41:43.979712  696997 cri.go:89] found id: ""
	I1017 19:41:43.979742  696997 logs.go:282] 0 containers: []
	W1017 19:41:43.979752  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:41:43.979760  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:41:43.979832  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:41:44.020552  696997 cri.go:89] found id: "97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:44.020581  696997 cri.go:89] found id: "ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:44.020587  696997 cri.go:89] found id: ""
	I1017 19:41:44.020598  696997 logs.go:282] 2 containers: [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770]
	I1017 19:41:44.020665  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:44.025980  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:44.031254  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:41:44.031330  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:41:44.069342  696997 cri.go:89] found id: ""
	I1017 19:41:44.069373  696997 logs.go:282] 0 containers: []
	W1017 19:41:44.069387  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:41:44.069435  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:41:44.069509  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:41:44.106630  696997 cri.go:89] found id: ""
	I1017 19:41:44.106663  696997 logs.go:282] 0 containers: []
	W1017 19:41:44.106675  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:41:44.106720  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:41:44.106742  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:41:44.182299  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:41:44.182353  696997 logs.go:123] Gathering logs for kube-controller-manager [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe] ...
	I1017 19:41:44.182377  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:44.219169  696997 logs.go:123] Gathering logs for kube-controller-manager [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770] ...
	I1017 19:41:44.219201  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:44.258283  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:41:44.258324  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:41:44.342259  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:41:44.342312  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:41:44.387622  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:41:44.387663  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:44.436656  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:41:44.436709  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:44.520151  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:41:44.520199  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:41:44.656916  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:41:44.656962  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:41:47.184769  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:41:47.185282  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:41:47.185353  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:41:47.185420  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:41:47.223769  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:47.223797  696997 cri.go:89] found id: ""
	I1017 19:41:47.223807  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:41:47.223867  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:47.229382  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:41:47.229458  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:41:47.266903  696997 cri.go:89] found id: ""
	I1017 19:41:47.266933  696997 logs.go:282] 0 containers: []
	W1017 19:41:47.266944  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:41:47.266952  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:41:47.267018  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:41:47.307547  696997 cri.go:89] found id: ""
	I1017 19:41:47.307580  696997 logs.go:282] 0 containers: []
	W1017 19:41:47.307593  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:41:47.307602  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:41:47.307666  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:41:47.345899  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:47.345938  696997 cri.go:89] found id: ""
	I1017 19:41:47.345950  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:41:47.346017  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:47.351849  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:41:47.351926  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:41:47.392001  696997 cri.go:89] found id: ""
	I1017 19:41:47.392035  696997 logs.go:282] 0 containers: []
	W1017 19:41:47.392048  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:41:47.392056  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:41:47.392122  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	W1017 19:41:45.089814  736846 pod_ready.go:104] pod "coredns-66bc5c9577-gnx5k" is not "Ready", error: <nil>
	W1017 19:41:47.589488  736846 pod_ready.go:104] pod "coredns-66bc5c9577-gnx5k" is not "Ready", error: <nil>
	I1017 19:41:43.879206  745903 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 19:41:43.879517  745903 start.go:159] libmachine.API.Create for "default-k8s-diff-port-112878" (driver="docker")
	I1017 19:41:43.879567  745903 client.go:168] LocalClient.Create starting
	I1017 19:41:43.879704  745903 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem
	I1017 19:41:43.879749  745903 main.go:141] libmachine: Decoding PEM data...
	I1017 19:41:43.879767  745903 main.go:141] libmachine: Parsing certificate...
	I1017 19:41:43.879835  745903 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem
	I1017 19:41:43.879856  745903 main.go:141] libmachine: Decoding PEM data...
	I1017 19:41:43.879870  745903 main.go:141] libmachine: Parsing certificate...
	I1017 19:41:43.880388  745903 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-112878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 19:41:43.905749  745903 cli_runner.go:211] docker network inspect default-k8s-diff-port-112878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 19:41:43.905868  745903 network_create.go:284] running [docker network inspect default-k8s-diff-port-112878] to gather additional debugging logs...
	I1017 19:41:43.905890  745903 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-112878
	W1017 19:41:43.930692  745903 cli_runner.go:211] docker network inspect default-k8s-diff-port-112878 returned with exit code 1
	I1017 19:41:43.930807  745903 network_create.go:287] error running [docker network inspect default-k8s-diff-port-112878]: docker network inspect default-k8s-diff-port-112878: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-112878 not found
	I1017 19:41:43.930847  745903 network_create.go:289] output of [docker network inspect default-k8s-diff-port-112878]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-112878 not found
	
	** /stderr **
	I1017 19:41:43.930993  745903 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:41:43.955597  745903 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-730d915fa684 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e2:02:cd:78:1c:78} reservation:<nil>}
	I1017 19:41:43.956743  745903 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c0eb20920271 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:96:b3:b5:eb:1f:90} reservation:<nil>}
	I1017 19:41:43.957396  745903 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b9c5a6663579 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:42:f3:20:fa:08:4c} reservation:<nil>}
	I1017 19:41:43.958158  745903 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ff724deaa8b6 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:de:b9:ef:51:e1:16} reservation:<nil>}
	I1017 19:41:43.959251  745903 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001edf1c0}
	I1017 19:41:43.959278  745903 network_create.go:124] attempt to create docker network default-k8s-diff-port-112878 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1017 19:41:43.959337  745903 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-112878 default-k8s-diff-port-112878
	I1017 19:41:44.039643  745903 network_create.go:108] docker network default-k8s-diff-port-112878 192.168.85.0/24 created
	I1017 19:41:44.039698  745903 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-112878" container
	I1017 19:41:44.039799  745903 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 19:41:44.063696  745903 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-112878 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-112878 --label created_by.minikube.sigs.k8s.io=true
	I1017 19:41:44.088602  745903 oci.go:103] Successfully created a docker volume default-k8s-diff-port-112878
	I1017 19:41:44.088703  745903 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-112878-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-112878 --entrypoint /usr/bin/test -v default-k8s-diff-port-112878:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 19:41:44.686628  745903 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-112878
	I1017 19:41:44.686677  745903 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:41:44.686734  745903 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 19:41:44.686842  745903 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-112878:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1017 19:41:43.978630  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	W1017 19:41:46.475028  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	I1017 19:41:47.425699  696997 cri.go:89] found id: "97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:47.425726  696997 cri.go:89] found id: "ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:47.425731  696997 cri.go:89] found id: ""
	I1017 19:41:47.425740  696997 logs.go:282] 2 containers: [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770]
	I1017 19:41:47.425808  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:47.430323  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:47.435004  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:41:47.435082  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:41:47.466554  696997 cri.go:89] found id: ""
	I1017 19:41:47.466590  696997 logs.go:282] 0 containers: []
	W1017 19:41:47.466601  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:41:47.466609  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:41:47.466667  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:41:47.500006  696997 cri.go:89] found id: ""
	I1017 19:41:47.500088  696997 logs.go:282] 0 containers: []
	W1017 19:41:47.500110  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:41:47.500135  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:41:47.500155  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:41:47.619242  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:41:47.619283  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:41:47.645496  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:41:47.645538  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:47.691843  696997 logs.go:123] Gathering logs for kube-controller-manager [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe] ...
	I1017 19:41:47.691887  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:47.732742  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:41:47.732778  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:41:47.816041  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:41:47.816089  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
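
The container-status probe above stacks two fallbacks: which crictl || echo crictl degrades to the bare command name if path lookup fails, and the whole crictl listing falls back to docker if the CRI side errors out. The same idiom in isolation, assuming only that one of the two CLIs is installed:

    # list all containers via crictl if available, else via docker
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
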
	I1017 19:41:47.859924  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:41:47.859956  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:41:47.940213  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
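
Each "describe nodes" attempt in this stretch fails identically because kubectl is pointed at the local apiserver endpoint, which is down; the healthz checks against https://192.168.76.2:8443 just below are refused for the same reason. A manual probe of the same two endpoints (a sketch, assuming shell access to the node) should fail the same way while the apiserver is out:

    # both refuse connections until kube-apiserver is back up
    curl -k --max-time 2 https://localhost:8443/healthz
    curl -k --max-time 2 https://192.168.76.2:8443/healthz
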
	I1017 19:41:47.940236  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:41:47.940253  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:48.016195  696997 logs.go:123] Gathering logs for kube-controller-manager [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770] ...
	I1017 19:41:48.016245  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:50.557764  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:41:50.558213  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:41:50.558272  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:41:50.558325  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:41:50.590784  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:50.590810  696997 cri.go:89] found id: ""
	I1017 19:41:50.590820  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:41:50.590881  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:50.595382  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:41:50.595466  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:41:50.627480  696997 cri.go:89] found id: ""
	I1017 19:41:50.627515  696997 logs.go:282] 0 containers: []
	W1017 19:41:50.627526  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:41:50.627534  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:41:50.627613  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:41:50.657135  696997 cri.go:89] found id: ""
	I1017 19:41:50.657171  696997 logs.go:282] 0 containers: []
	W1017 19:41:50.657183  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:41:50.657190  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:41:50.657243  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:41:50.686194  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:50.686216  696997 cri.go:89] found id: ""
	I1017 19:41:50.686224  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:41:50.686275  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:50.690940  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:41:50.691002  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:41:50.719897  696997 cri.go:89] found id: ""
	I1017 19:41:50.719931  696997 logs.go:282] 0 containers: []
	W1017 19:41:50.719945  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:41:50.719953  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:41:50.720021  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:41:50.750535  696997 cri.go:89] found id: "97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:50.750558  696997 cri.go:89] found id: "ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:50.750562  696997 cri.go:89] found id: ""
	I1017 19:41:50.750570  696997 logs.go:282] 2 containers: [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770]
	I1017 19:41:50.750619  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:50.755126  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:50.759801  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:41:50.759882  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:41:50.788964  696997 cri.go:89] found id: ""
	I1017 19:41:50.788991  696997 logs.go:282] 0 containers: []
	W1017 19:41:50.788999  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:41:50.789006  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:41:50.789067  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:41:50.818768  696997 cri.go:89] found id: ""
	I1017 19:41:50.818796  696997 logs.go:282] 0 containers: []
	W1017 19:41:50.818808  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:41:50.818843  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:41:50.818862  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:41:50.836186  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:41:50.836219  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:50.871170  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:41:50.871204  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:50.926523  696997 logs.go:123] Gathering logs for kube-controller-manager [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770] ...
	I1017 19:41:50.926566  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:50.958628  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:41:50.958660  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:41:51.018884  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:41:51.018922  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:41:51.055435  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:41:51.055475  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:41:51.182896  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:41:51.182938  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:41:51.251894  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:41:51.251915  696997 logs.go:123] Gathering logs for kube-controller-manager [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe] ...
	I1017 19:41:51.251929  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	W1017 19:41:49.731782  736846 pod_ready.go:104] pod "coredns-66bc5c9577-gnx5k" is not "Ready", error: <nil>
	I1017 19:41:52.090000  736846 pod_ready.go:94] pod "coredns-66bc5c9577-gnx5k" is "Ready"
	I1017 19:41:52.090038  736846 pod_ready.go:86] duration metric: took 33.506840151s for pod "coredns-66bc5c9577-gnx5k" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:52.093480  736846 pod_ready.go:83] waiting for pod "etcd-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:52.098812  736846 pod_ready.go:94] pod "etcd-no-preload-171807" is "Ready"
	I1017 19:41:52.098846  736846 pod_ready.go:86] duration metric: took 5.338264ms for pod "etcd-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:52.101309  736846 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:52.106387  736846 pod_ready.go:94] pod "kube-apiserver-no-preload-171807" is "Ready"
	I1017 19:41:52.106419  736846 pod_ready.go:86] duration metric: took 5.082393ms for pod "kube-apiserver-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:52.108913  736846 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:52.287392  736846 pod_ready.go:94] pod "kube-controller-manager-no-preload-171807" is "Ready"
	I1017 19:41:52.287421  736846 pod_ready.go:86] duration metric: took 178.480253ms for pod "kube-controller-manager-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:52.487984  736846 pod_ready.go:83] waiting for pod "kube-proxy-cdbjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:52.887204  736846 pod_ready.go:94] pod "kube-proxy-cdbjg" is "Ready"
	I1017 19:41:52.887238  736846 pod_ready.go:86] duration metric: took 399.228226ms for pod "kube-proxy-cdbjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:53.087631  736846 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:53.487223  736846 pod_ready.go:94] pod "kube-scheduler-no-preload-171807" is "Ready"
	I1017 19:41:53.487258  736846 pod_ready.go:86] duration metric: took 399.594972ms for pod "kube-scheduler-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:53.487275  736846 pod_ready.go:40] duration metric: took 34.908550348s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:41:53.538588  736846 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 19:41:53.540718  736846 out.go:179] * Done! kubectl is now configured to use "no-preload-171807" cluster and "default" namespace by default
	I1017 19:41:51.085768  745903 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-112878:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (6.398874828s)
	I1017 19:41:51.085802  745903 kic.go:203] duration metric: took 6.39906432s to extract preloaded images to volume ...
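
The extraction step that just completed boils down to mounting the lz4 preload tarball read-only into a throwaway kicbase container and untarring it into the named volume that later backs the node container's /var. The equivalent standalone command, using the same paths and image digest as above:

    # unpack the cri-o image preload into the volume backing the node's /var
    docker run --rm --entrypoint /usr/bin/tar \
      -v /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro \
      -v default-k8s-diff-port-112878:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 \
      -I lz4 -xf /preloaded.tar -C /extractDir
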
	W1017 19:41:51.085917  745903 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1017 19:41:51.085964  745903 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1017 19:41:51.086010  745903 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 19:41:51.154221  745903 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-112878 --name default-k8s-diff-port-112878 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-112878 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-112878 --network default-k8s-diff-port-112878 --ip 192.168.85.2 --volume default-k8s-diff-port-112878:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 19:41:51.476374  745903 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-112878 --format={{.State.Running}}
	I1017 19:41:51.495224  745903 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-112878 --format={{.State.Status}}
	I1017 19:41:51.516085  745903 cli_runner.go:164] Run: docker exec default-k8s-diff-port-112878 stat /var/lib/dpkg/alternatives/iptables
	I1017 19:41:51.565342  745903 oci.go:144] the created container "default-k8s-diff-port-112878" has a running status.
	I1017 19:41:51.565382  745903 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21753-492109/.minikube/machines/default-k8s-diff-port-112878/id_rsa...
	I1017 19:41:51.995880  745903 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21753-492109/.minikube/machines/default-k8s-diff-port-112878/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 19:41:52.027720  745903 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-112878 --format={{.State.Status}}
	I1017 19:41:52.047600  745903 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 19:41:52.047629  745903 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-112878 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 19:41:52.096841  745903 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-112878 --format={{.State.Status}}
	I1017 19:41:52.117434  745903 machine.go:93] provisionDockerMachine start ...
	I1017 19:41:52.117526  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:52.136510  745903 main.go:141] libmachine: Using SSH client type: native
	I1017 19:41:52.136815  745903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1017 19:41:52.136833  745903 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:41:52.277302  745903 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-112878
	
	I1017 19:41:52.277339  745903 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-112878"
	I1017 19:41:52.277449  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:52.296491  745903 main.go:141] libmachine: Using SSH client type: native
	I1017 19:41:52.296784  745903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1017 19:41:52.296802  745903 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-112878 && echo "default-k8s-diff-port-112878" | sudo tee /etc/hostname
	I1017 19:41:52.443089  745903 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-112878
	
	I1017 19:41:52.443176  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:52.461477  745903 main.go:141] libmachine: Using SSH client type: native
	I1017 19:41:52.461743  745903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1017 19:41:52.461767  745903 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-112878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-112878/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-112878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:41:52.602406  745903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:41:52.602442  745903 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-492109/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-492109/.minikube}
	I1017 19:41:52.602474  745903 ubuntu.go:190] setting up certificates
	I1017 19:41:52.602491  745903 provision.go:84] configureAuth start
	I1017 19:41:52.602561  745903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-112878
	I1017 19:41:52.620813  745903 provision.go:143] copyHostCerts
	I1017 19:41:52.620894  745903 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem, removing ...
	I1017 19:41:52.620911  745903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem
	I1017 19:41:52.621003  745903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem (1123 bytes)
	I1017 19:41:52.621180  745903 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem, removing ...
	I1017 19:41:52.621197  745903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem
	I1017 19:41:52.621243  745903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem (1679 bytes)
	I1017 19:41:52.621333  745903 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem, removing ...
	I1017 19:41:52.621346  745903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem
	I1017 19:41:52.621380  745903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem (1078 bytes)
	I1017 19:41:52.621453  745903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-112878 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-112878 localhost minikube]
	I1017 19:41:52.851727  745903 provision.go:177] copyRemoteCerts
	I1017 19:41:52.851800  745903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:41:52.851850  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:52.874134  745903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/default-k8s-diff-port-112878/id_rsa Username:docker}
	I1017 19:41:52.977877  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 19:41:52.999283  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1017 19:41:53.018922  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:41:53.038326  745903 provision.go:87] duration metric: took 435.813371ms to configureAuth
	I1017 19:41:53.038362  745903 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:41:53.038589  745903 config.go:182] Loaded profile config "default-k8s-diff-port-112878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:41:53.038783  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:53.057284  745903 main.go:141] libmachine: Using SSH client type: native
	I1017 19:41:53.057586  745903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1017 19:41:53.057609  745903 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:41:53.319990  745903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:41:53.320021  745903 machine.go:96] duration metric: took 1.202562107s to provisionDockerMachine
	I1017 19:41:53.320034  745903 client.go:171] duration metric: took 9.440459566s to LocalClient.Create
	I1017 19:41:53.320053  745903 start.go:167] duration metric: took 9.440540224s to libmachine.API.Create "default-k8s-diff-port-112878"
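
The CRIO_MINIKUBE_OPTIONS step a few lines up is how the container runtime is told to treat the in-cluster service CIDR as an insecure registry: an environment file is written under /etc/sysconfig (presumably sourced by the crio unit on this image) and the service is restarted. Condensed into its three effective commands:

    # mark the service CIDR as an insecure registry for CRI-O, then restart it
    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
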
	I1017 19:41:53.320061  745903 start.go:293] postStartSetup for "default-k8s-diff-port-112878" (driver="docker")
	I1017 19:41:53.320071  745903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:41:53.320133  745903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:41:53.320188  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:53.338483  745903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/default-k8s-diff-port-112878/id_rsa Username:docker}
	I1017 19:41:53.439730  745903 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:41:53.443610  745903 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:41:53.443641  745903 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:41:53.443657  745903 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/addons for local assets ...
	I1017 19:41:53.443741  745903 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/files for local assets ...
	I1017 19:41:53.443855  745903 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem -> 4957252.pem in /etc/ssl/certs
	I1017 19:41:53.443985  745903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:41:53.452577  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:41:53.476283  745903 start.go:296] duration metric: took 156.204633ms for postStartSetup
	I1017 19:41:53.476716  745903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-112878
	I1017 19:41:53.497175  745903 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/config.json ...
	I1017 19:41:53.497496  745903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:41:53.497548  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:53.517985  745903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/default-k8s-diff-port-112878/id_rsa Username:docker}
	W1017 19:41:49.175662  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	W1017 19:41:51.474827  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	I1017 19:41:53.618954  745903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:41:53.624582  745903 start.go:128] duration metric: took 9.74918919s to createHost
	I1017 19:41:53.624613  745903 start.go:83] releasing machines lock for "default-k8s-diff-port-112878", held for 9.749362486s
	I1017 19:41:53.624700  745903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-112878
	I1017 19:41:53.644954  745903 ssh_runner.go:195] Run: cat /version.json
	I1017 19:41:53.645035  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:53.645049  745903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:41:53.645150  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:53.666211  745903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/default-k8s-diff-port-112878/id_rsa Username:docker}
	I1017 19:41:53.666376  745903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/default-k8s-diff-port-112878/id_rsa Username:docker}
	I1017 19:41:53.761670  745903 ssh_runner.go:195] Run: systemctl --version
	I1017 19:41:53.841749  745903 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:41:53.889911  745903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:41:53.895715  745903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:41:53.895780  745903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:41:53.927529  745903 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
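
Since the cluster will get its CNI from kindnet (recommended further down for the docker driver + crio combination), the stock bridge and podman configs are parked by renaming them with a .mk_disabled suffix rather than deleted. A sketch of the reverse operation, should the stock configs ever need to come back:

    # re-enable CNI configs that minikube parked
    sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;
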
	I1017 19:41:53.927559  745903 start.go:495] detecting cgroup driver to use...
	I1017 19:41:53.927599  745903 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 19:41:53.927656  745903 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:41:53.948218  745903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:41:53.964126  745903 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:41:53.964195  745903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:41:53.986091  745903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:41:54.008375  745903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:41:54.110900  745903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:41:54.213981  745903 docker.go:234] disabling docker service ...
	I1017 19:41:54.214056  745903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:41:54.236247  745903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:41:54.250718  745903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:41:54.348852  745903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:41:54.442317  745903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:41:54.456550  745903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:41:54.473516  745903 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:41:54.473584  745903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:54.485020  745903 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 19:41:54.485084  745903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:54.495139  745903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:54.504821  745903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:54.515116  745903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:41:54.524701  745903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:54.534442  745903 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:54.549528  745903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:54.559315  745903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:41:54.567747  745903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:41:54.576150  745903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:41:54.660865  745903 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:41:54.775480  745903 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:41:54.775545  745903 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:41:54.779760  745903 start.go:563] Will wait 60s for crictl version
	I1017 19:41:54.779828  745903 ssh_runner.go:195] Run: which crictl
	I1017 19:41:54.783936  745903 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:41:54.810648  745903 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:41:54.810758  745903 ssh_runner.go:195] Run: crio --version
	I1017 19:41:54.842382  745903 ssh_runner.go:195] Run: crio --version
	I1017 19:41:54.875317  745903 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
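
All of the CRI-O tuning before that restart is done with in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to systemd, put conmon in the "pod" cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls so pods can bind low ports without extra privileges. The keys those edits leave behind in the drop-in are:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
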
	I1017 19:41:53.782830  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:41:53.783314  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:41:53.783390  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:41:53.783463  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:41:53.815737  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:53.815765  696997 cri.go:89] found id: ""
	I1017 19:41:53.815774  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:41:53.815837  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:53.820816  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:41:53.820891  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:41:53.853452  696997 cri.go:89] found id: ""
	I1017 19:41:53.853485  696997 logs.go:282] 0 containers: []
	W1017 19:41:53.853498  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:41:53.853506  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:41:53.853585  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:41:53.887481  696997 cri.go:89] found id: ""
	I1017 19:41:53.887516  696997 logs.go:282] 0 containers: []
	W1017 19:41:53.887528  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:41:53.887536  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:41:53.887620  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:41:53.922787  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:53.922816  696997 cri.go:89] found id: ""
	I1017 19:41:53.922826  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:41:53.922887  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:53.927864  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:41:53.927932  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:41:53.961456  696997 cri.go:89] found id: ""
	I1017 19:41:53.961486  696997 logs.go:282] 0 containers: []
	W1017 19:41:53.961497  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:41:53.961505  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:41:53.961571  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:41:53.995706  696997 cri.go:89] found id: "97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:53.995735  696997 cri.go:89] found id: "ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:53.995741  696997 cri.go:89] found id: ""
	I1017 19:41:53.995753  696997 logs.go:282] 2 containers: [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770]
	I1017 19:41:53.995825  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:54.000608  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:54.005044  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:41:54.005111  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:41:54.042906  696997 cri.go:89] found id: ""
	I1017 19:41:54.042941  696997 logs.go:282] 0 containers: []
	W1017 19:41:54.042953  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:41:54.042961  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:41:54.043023  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:41:54.080357  696997 cri.go:89] found id: ""
	I1017 19:41:54.080385  696997 logs.go:282] 0 containers: []
	W1017 19:41:54.080397  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:41:54.080419  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:41:54.080435  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:54.118697  696997 logs.go:123] Gathering logs for kube-controller-manager [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770] ...
	I1017 19:41:54.118728  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	W1017 19:41:54.147807  696997 logs.go:130] failed kube-controller-manager [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:41:54.145094    6966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770\": container with ID starting with ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770 not found: ID does not exist" containerID="ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	time="2025-10-17T19:41:54Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770\": container with ID starting with ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1017 19:41:54.145094    6966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770\": container with ID starting with ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770 not found: ID does not exist" containerID="ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	time="2025-10-17T19:41:54Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770\": container with ID starting with ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770 not found: ID does not exist"
	
	** /stderr **
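
The NotFound above is a race between listing and reading: crictl reported ed93611c... as a kube-controller-manager container at 19:41:53.995, but by 19:41:54.145 the runtime had already removed it, so the follow-up log read failed. A tolerant variant of the same call, assuming the id can vanish between the two steps:

    id=ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770
    sudo /usr/local/bin/crictl logs --tail 400 "$id" 2>/dev/null \
      || echo "container $id no longer exists"
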
	I1017 19:41:54.147835  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:41:54.147851  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:41:54.215314  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:41:54.215372  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:41:54.233067  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:41:54.233105  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:41:54.305009  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:41:54.305030  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:41:54.305045  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:54.363612  696997 logs.go:123] Gathering logs for kube-controller-manager [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe] ...
	I1017 19:41:54.363650  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:54.402367  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:41:54.402396  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:41:54.436144  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:41:54.436173  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:41:57.034175  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:41:57.034671  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:41:57.034768  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:41:57.034836  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:41:57.065022  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:57.065048  696997 cri.go:89] found id: ""
	I1017 19:41:57.065059  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:41:57.065122  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:57.069618  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:41:57.069719  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:41:57.098020  696997 cri.go:89] found id: ""
	I1017 19:41:57.098045  696997 logs.go:282] 0 containers: []
	W1017 19:41:57.098053  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:41:57.098060  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:41:57.098122  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:41:57.127769  696997 cri.go:89] found id: ""
	I1017 19:41:57.127793  696997 logs.go:282] 0 containers: []
	W1017 19:41:57.127801  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:41:57.127808  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:41:57.127957  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:41:57.159935  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:57.159960  696997 cri.go:89] found id: ""
	I1017 19:41:57.159971  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:41:57.160033  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:57.164577  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:41:57.164652  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:41:57.195419  696997 cri.go:89] found id: ""
	I1017 19:41:57.195448  696997 logs.go:282] 0 containers: []
	W1017 19:41:57.195460  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:41:57.195476  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:41:57.195545  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:41:57.225635  696997 cri.go:89] found id: "97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:57.225655  696997 cri.go:89] found id: ""
	I1017 19:41:57.225663  696997 logs.go:282] 1 containers: [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe]
	I1017 19:41:57.225744  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:57.230083  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:41:57.230152  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:41:57.259600  696997 cri.go:89] found id: ""
	I1017 19:41:57.259625  696997 logs.go:282] 0 containers: []
	W1017 19:41:57.259632  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:41:57.259641  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:41:57.259732  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:41:57.291664  696997 cri.go:89] found id: ""
	I1017 19:41:57.291705  696997 logs.go:282] 0 containers: []
	W1017 19:41:57.291719  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:41:57.291732  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:41:57.291755  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:57.326995  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:41:57.327027  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:57.383885  696997 logs.go:123] Gathering logs for kube-controller-manager [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe] ...
	I1017 19:41:57.383926  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:54.876655  745903 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-112878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:41:54.896020  745903 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 19:41:54.900719  745903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
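
The /etc/hosts rewrite above is an idempotent replace-or-add: strip any existing line ending in the tab-separated name, append the fresh mapping, and copy the temp file back in a single sudo step so the redirection itself needs no elevated shell. Generalized, with NAME and IP as hypothetical placeholders (the run above uses host.minikube.internal and 192.168.85.1):

    # replace-or-add one hosts entry idempotently
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
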
	I1017 19:41:54.912420  745903 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-112878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-112878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:41:54.912551  745903 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:41:54.912619  745903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:41:54.951205  745903 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:41:54.951230  745903 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:41:54.951292  745903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:41:54.982389  745903 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:41:54.982415  745903 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:41:54.982423  745903 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1017 19:41:54.982507  745903 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-112878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-112878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
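
The unit fragment above becomes the systemd drop-in written just below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf: the empty ExecStart= first clears whatever the base kubelet.service declared, and the second ExecStart re-declares the command with node-specific flags (hostname override, node IP, bootstrap and final kubeconfigs, cgroups-per-qos and node-allocatable enforcement switched off). On the node, the merged result can be inspected with:

    # show the effective kubelet unit after the drop-in is applied
    systemctl cat kubelet
    systemctl show kubelet -p ExecStart
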
	I1017 19:41:54.982569  745903 ssh_runner.go:195] Run: crio config
	I1017 19:41:55.030938  745903 cni.go:84] Creating CNI manager for ""
	I1017 19:41:55.030967  745903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:41:55.030987  745903 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:41:55.031011  745903 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-112878 NodeName:default-k8s-diff-port-112878 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:41:55.031131  745903 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-112878"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
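	
	The three documents above (InitConfiguration + ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a sketch, the same bundled kubeadm binary can sanity-check such a file by hand (kubeadm config validate exists in this Kubernetes version; the binary path is taken from this run):
	
		# Validate the generated kubeadm config against the v1beta4 schema
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
		  --config /var/tmp/minikube/kubeadm.yaml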
	
	I1017 19:41:55.031210  745903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:41:55.040219  745903 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:41:55.040286  745903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:41:55.048810  745903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1017 19:41:55.062892  745903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:41:55.079756  745903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1017 19:41:55.094593  745903 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 19:41:55.098876  745903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
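	
	The /etc/hosts one-liner above is an idempotent update: strip any prior control-plane entry, append the current one, and copy the temp file back over. Expanded for readability (a sketch; cp rather than mv is deliberate, since /etc/hosts inside the kic container is a bind mount and replacing its inode would detach it):
	
		{
		  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # drop any stale entry
		  printf '192.168.85.2\tcontrol-plane.minikube.internal\n'   # append the current IP
		} > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts   # overwrite in place; the bind mount stays intact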
	I1017 19:41:55.109971  745903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:41:55.196482  745903 ssh_runner.go:195] Run: sudo systemctl start kubelet
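	
	With the drop-in installed, daemon-reload done, and kubelet started, the rendered unit can be checked from the host (a sketch; the profile name is taken from this run):
	
		minikube -p default-k8s-diff-port-112878 ssh -- systemctl cat kubelet      # unit + 10-kubeadm.conf drop-in
		minikube -p default-k8s-diff-port-112878 ssh -- systemctl is-active kubelet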
	I1017 19:41:55.226525  745903 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878 for IP: 192.168.85.2
	I1017 19:41:55.226551  745903 certs.go:195] generating shared ca certs ...
	I1017 19:41:55.226575  745903 certs.go:227] acquiring lock for ca certs: {Name:mkc97483d62151ba5c32d923dd19e3e2b3661468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:55.226784  745903 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key
	I1017 19:41:55.226831  745903 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key
	I1017 19:41:55.226842  745903 certs.go:257] generating profile certs ...
	I1017 19:41:55.226900  745903 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/client.key
	I1017 19:41:55.226921  745903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/client.crt with IP's: []
	I1017 19:41:55.371718  745903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/client.crt ...
	I1017 19:41:55.371749  745903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/client.crt: {Name:mkc6056f4159c9badc3cdb573eca9fad46db65c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:55.371927  745903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/client.key ...
	I1017 19:41:55.371940  745903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/client.key: {Name:mk28fb14f4859226ed9121c1a2de1ac3628155bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:55.372020  745903 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.key.55092fd4
	I1017 19:41:55.372037  745903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.crt.55092fd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1017 19:41:55.435601  745903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.crt.55092fd4 ...
	I1017 19:41:55.435634  745903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.crt.55092fd4: {Name:mk0f6162d53fec5018596205793f8f650c48ad99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:55.435855  745903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.key.55092fd4 ...
	I1017 19:41:55.435876  745903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.key.55092fd4: {Name:mkd7f4773df571e9a40ca5fa7833cc5056f2efda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:55.435962  745903 certs.go:382] copying /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.crt.55092fd4 -> /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.crt
	I1017 19:41:55.436039  745903 certs.go:386] copying /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.key.55092fd4 -> /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.key
	I1017 19:41:55.436107  745903 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/proxy-client.key
	I1017 19:41:55.436123  745903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/proxy-client.crt with IP's: []
	I1017 19:41:55.750469  745903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/proxy-client.crt ...
	I1017 19:41:55.750502  745903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/proxy-client.crt: {Name:mk71b1e23b81cc0ebbc0dffc742665d19c9879b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:55.750700  745903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/proxy-client.key ...
	I1017 19:41:55.750714  745903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/proxy-client.key: {Name:mk61aa2c336ce37f90e9cb643e557e29ac524333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:55.750907  745903 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725.pem (1338 bytes)
	W1017 19:41:55.750947  745903 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725_empty.pem, impossibly tiny 0 bytes
	I1017 19:41:55.750956  745903 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 19:41:55.750988  745903 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem (1078 bytes)
	I1017 19:41:55.751010  745903 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:41:55.751033  745903 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem (1679 bytes)
	I1017 19:41:55.751079  745903 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:41:55.751671  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:41:55.773443  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:41:55.793202  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:41:55.812967  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 19:41:55.832311  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:41:55.851598  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 19:41:55.871256  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:41:55.890758  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:41:55.910393  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725.pem --> /usr/share/ca-certificates/495725.pem (1338 bytes)
	I1017 19:41:55.933242  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /usr/share/ca-certificates/4957252.pem (1708 bytes)
	I1017 19:41:55.954814  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:41:55.974799  745903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
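	
	After these copies the node-side layout is fixed: the cluster and proxy CAs plus the profile's serving certs land under /var/lib/minikube/certs, and trust-store copies under /usr/share/ca-certificates. A quick listing to confirm (sketch):
	
		sudo ls -l /var/lib/minikube/certs/      # ca.*, proxy-client-ca.*, apiserver.*, proxy-client.*
		sudo ls -l /usr/share/ca-certificates/   # 495725.pem, 4957252.pem, minikubeCA.pem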
	I1017 19:41:55.988801  745903 ssh_runner.go:195] Run: openssl version
	I1017 19:41:55.995437  745903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/495725.pem && ln -fs /usr/share/ca-certificates/495725.pem /etc/ssl/certs/495725.pem"
	I1017 19:41:56.005322  745903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/495725.pem
	I1017 19:41:56.009540  745903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/495725.pem
	I1017 19:41:56.009604  745903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/495725.pem
	I1017 19:41:56.044653  745903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/495725.pem /etc/ssl/certs/51391683.0"
	I1017 19:41:56.054453  745903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4957252.pem && ln -fs /usr/share/ca-certificates/4957252.pem /etc/ssl/certs/4957252.pem"
	I1017 19:41:56.064567  745903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4957252.pem
	I1017 19:41:56.068931  745903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/4957252.pem
	I1017 19:41:56.068984  745903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4957252.pem
	I1017 19:41:56.105262  745903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4957252.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:41:56.115165  745903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:41:56.124644  745903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:41:56.128961  745903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:41:56.129036  745903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:41:56.164429  745903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
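	
	The <hash>.0 link names above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's c_rehash convention: the file name is the certificate's subject hash plus a collision counter, which is how -CApath trust lookups find a CA. Reproducing one by hand (sketch):
	
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		echo "$h"   # prints b5213941 for this CA, matching the symlink created above
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"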
	I1017 19:41:56.174208  745903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:41:56.178436  745903 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 19:41:56.178496  745903 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-112878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-112878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:41:56.178587  745903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:41:56.178657  745903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:41:56.208824  745903 cri.go:89] found id: ""
	I1017 19:41:56.208892  745903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:41:56.217790  745903 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 19:41:56.226793  745903 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 19:41:56.226866  745903 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 19:41:56.235617  745903 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 19:41:56.235640  745903 kubeadm.go:157] found existing configuration files:
	
	I1017 19:41:56.235703  745903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1017 19:41:56.244591  745903 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 19:41:56.244645  745903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 19:41:56.253278  745903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1017 19:41:56.262226  745903 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 19:41:56.262315  745903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 19:41:56.270623  745903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1017 19:41:56.279505  745903 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 19:41:56.279569  745903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 19:41:56.287938  745903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1017 19:41:56.296709  745903 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 19:41:56.296788  745903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
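	
	The four grep/rm pairs above are a stale-config sweep: any kubeconfig under /etc/kubernetes that does not already point at control-plane.minikube.internal:8444 is removed so kubeadm regenerates it. Compressed into a loop, the equivalent sketch is:
	
		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q "https://control-plane.minikube.internal:8444" /etc/kubernetes/$f.conf \
		    || sudo rm -f /etc/kubernetes/$f.conf
		done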
	I1017 19:41:56.305288  745903 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 19:41:56.371823  745903 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 19:41:56.434893  745903 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
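	
	The long --ignore-preflight-errors list reflects the docker driver: host-level checks (swap, ports, memory, kernel config) cannot be satisfied inside the kic container, and the SystemVerification warning above is the expected symptom, since the container's /lib/modules has no "configs" module to parse. The preflight phase can be re-run in isolation to see the raw findings (sketch, same env-PATH pattern as the init invocation above):
	
		sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
		  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml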
	W1017 19:41:53.979823  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	W1017 19:41:56.473888  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	W1017 19:41:58.474498  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	I1017 19:41:57.416632  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:41:57.416670  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:41:57.487169  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:41:57.487222  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:41:57.520668  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:41:57.520717  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:41:57.619586  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:41:57.619629  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:41:57.637960  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:41:57.638000  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:41:57.700226  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:42:00.200846  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:42:00.201381  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
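	
	This healthz retry loop is doing nothing more than the probe below; "connection refused" means no process is listening on 8443 yet, not that the probe is misconfigured. The manual equivalent (sketch; -k because the apiserver's serving cert is not in the client trust store here):
	
		curl -k https://192.168.76.2:8443/healthz   # refused until kube-apiserver binds the port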
	I1017 19:42:00.201441  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:42:00.201491  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:42:00.233549  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:42:00.233570  696997 cri.go:89] found id: ""
	I1017 19:42:00.233578  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:42:00.233637  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:00.238003  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:42:00.238084  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:42:00.269145  696997 cri.go:89] found id: ""
	I1017 19:42:00.269181  696997 logs.go:282] 0 containers: []
	W1017 19:42:00.269192  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:42:00.269203  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:42:00.269260  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:42:00.300539  696997 cri.go:89] found id: ""
	I1017 19:42:00.300571  696997 logs.go:282] 0 containers: []
	W1017 19:42:00.300583  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:42:00.300591  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:42:00.300656  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:42:00.332263  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:42:00.332299  696997 cri.go:89] found id: ""
	I1017 19:42:00.332308  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:42:00.332383  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:00.336968  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:42:00.337034  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:42:00.368223  696997 cri.go:89] found id: ""
	I1017 19:42:00.368250  696997 logs.go:282] 0 containers: []
	W1017 19:42:00.368262  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:42:00.368270  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:42:00.368339  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:42:00.398184  696997 cri.go:89] found id: "97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:42:00.398210  696997 cri.go:89] found id: ""
	I1017 19:42:00.398220  696997 logs.go:282] 1 containers: [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe]
	I1017 19:42:00.398283  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:00.402968  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:42:00.403040  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:42:00.438191  696997 cri.go:89] found id: ""
	I1017 19:42:00.438220  696997 logs.go:282] 0 containers: []
	W1017 19:42:00.438232  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:42:00.438239  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:42:00.438303  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:42:00.469917  696997 cri.go:89] found id: ""
	I1017 19:42:00.469963  696997 logs.go:282] 0 containers: []
	W1017 19:42:00.469975  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:42:00.469987  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:42:00.470002  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:42:00.532636  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:42:00.532675  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:42:00.570627  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:42:00.570659  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:42:00.665335  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:42:00.665379  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:42:00.683771  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:42:00.683814  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:42:00.751728  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:42:00.751755  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:42:00.751770  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:42:00.787331  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:42:00.787374  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:42:00.848156  696997 logs.go:123] Gathering logs for kube-controller-manager [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe] ...
	I1017 19:42:00.848199  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
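	
	Each "Gathering logs for <component>" pair above follows the same two-step crictl pattern: resolve the container ID by name, then tail its log. Generically (sketch; the id variable is illustrative):
	
		id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
		sudo crictl logs --tail 400 "$id"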
	W1017 19:42:00.974385  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	W1017 19:42:02.974546  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	I1017 19:42:03.380762  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:42:03.381177  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:42:03.381229  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:42:03.381282  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:42:03.416064  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:42:03.416089  696997 cri.go:89] found id: ""
	I1017 19:42:03.416101  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:42:03.416165  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:03.421437  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:42:03.421636  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:42:03.455831  696997 cri.go:89] found id: ""
	I1017 19:42:03.455859  696997 logs.go:282] 0 containers: []
	W1017 19:42:03.455867  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:42:03.455873  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:42:03.455931  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:42:03.491585  696997 cri.go:89] found id: ""
	I1017 19:42:03.491617  696997 logs.go:282] 0 containers: []
	W1017 19:42:03.491721  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:42:03.491730  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:42:03.491892  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:42:03.525823  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:42:03.525850  696997 cri.go:89] found id: ""
	I1017 19:42:03.525862  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:42:03.525935  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:03.530424  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:42:03.530499  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:42:03.565584  696997 cri.go:89] found id: ""
	I1017 19:42:03.565612  696997 logs.go:282] 0 containers: []
	W1017 19:42:03.565628  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:42:03.565636  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:42:03.565707  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:42:03.602994  696997 cri.go:89] found id: "97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:42:03.603016  696997 cri.go:89] found id: ""
	I1017 19:42:03.603026  696997 logs.go:282] 1 containers: [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe]
	I1017 19:42:03.603086  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:03.608187  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:42:03.608259  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:42:03.643990  696997 cri.go:89] found id: ""
	I1017 19:42:03.644021  696997 logs.go:282] 0 containers: []
	W1017 19:42:03.644043  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:42:03.644052  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:42:03.644141  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:42:03.676998  696997 cri.go:89] found id: ""
	I1017 19:42:03.677026  696997 logs.go:282] 0 containers: []
	W1017 19:42:03.677037  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:42:03.677049  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:42:03.677064  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:42:03.720739  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:42:03.720777  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:42:03.794628  696997 logs.go:123] Gathering logs for kube-controller-manager [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe] ...
	I1017 19:42:03.794672  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:42:03.830773  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:42:03.830801  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:42:03.914552  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:42:03.914594  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:42:03.950023  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:42:03.950063  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:42:04.072786  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:42:04.072823  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:42:04.091894  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:42:04.091939  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:42:04.160835  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:42:06.661775  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:42:06.662352  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:42:06.662432  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:42:06.662502  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:42:06.702042  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:42:06.702068  696997 cri.go:89] found id: ""
	I1017 19:42:06.702079  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:42:06.702156  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:06.707811  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:42:06.707894  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:42:06.755533  696997 cri.go:89] found id: ""
	I1017 19:42:06.755650  696997 logs.go:282] 0 containers: []
	W1017 19:42:06.755730  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:42:06.755743  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:42:06.755808  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:42:06.787076  696997 cri.go:89] found id: ""
	I1017 19:42:06.787101  696997 logs.go:282] 0 containers: []
	W1017 19:42:06.787111  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:42:06.787118  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:42:06.787180  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:42:06.817208  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:42:06.817235  696997 cri.go:89] found id: ""
	I1017 19:42:06.817246  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:42:06.817313  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:06.822980  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:42:06.823058  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:42:06.857086  696997 cri.go:89] found id: ""
	I1017 19:42:06.857125  696997 logs.go:282] 0 containers: []
	W1017 19:42:06.857135  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:42:06.857145  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:42:06.857210  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:42:06.892760  696997 cri.go:89] found id: "97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:42:06.892784  696997 cri.go:89] found id: ""
	I1017 19:42:06.892793  696997 logs.go:282] 1 containers: [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe]
	I1017 19:42:06.892854  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:06.898140  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:42:06.898218  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:42:06.930132  696997 cri.go:89] found id: ""
	I1017 19:42:06.930157  696997 logs.go:282] 0 containers: []
	W1017 19:42:06.930167  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:42:06.930173  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:42:06.930229  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:42:06.958873  696997 cri.go:89] found id: ""
	I1017 19:42:06.958900  696997 logs.go:282] 0 containers: []
	W1017 19:42:06.958908  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:42:06.958919  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:42:06.958932  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:42:07.053179  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:42:07.053221  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:42:07.072919  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:42:07.072954  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:42:07.135798  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:42:07.135826  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:42:07.135844  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:42:07.177675  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:42:07.177730  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:42:07.237968  696997 logs.go:123] Gathering logs for kube-controller-manager [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe] ...
	I1017 19:42:07.238010  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:42:07.267890  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:42:07.267928  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:42:07.334560  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:42:07.334603  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:42:07.480427  745903 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 19:42:07.480500  745903 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 19:42:07.480646  745903 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 19:42:07.480769  745903 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1017 19:42:07.480883  745903 kubeadm.go:318] OS: Linux
	I1017 19:42:07.480966  745903 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 19:42:07.481049  745903 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 19:42:07.481135  745903 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 19:42:07.481229  745903 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 19:42:07.481283  745903 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 19:42:07.481343  745903 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 19:42:07.481408  745903 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 19:42:07.481461  745903 kubeadm.go:318] CGROUPS_IO: enabled
	I1017 19:42:07.481559  745903 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 19:42:07.481733  745903 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 19:42:07.481858  745903 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 19:42:07.481933  745903 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 19:42:07.487731  745903 out.go:252]   - Generating certificates and keys ...
	I1017 19:42:07.487831  745903 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 19:42:07.487894  745903 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 19:42:07.487952  745903 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 19:42:07.487998  745903 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 19:42:07.488081  745903 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 19:42:07.488173  745903 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 19:42:07.488252  745903 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 19:42:07.488403  745903 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-112878 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 19:42:07.488495  745903 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 19:42:07.488637  745903 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-112878 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 19:42:07.488785  745903 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 19:42:07.488883  745903 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 19:42:07.488952  745903 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 19:42:07.489028  745903 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 19:42:07.489105  745903 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 19:42:07.489189  745903 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 19:42:07.489281  745903 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 19:42:07.489417  745903 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 19:42:07.489506  745903 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 19:42:07.489604  745903 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 19:42:07.489727  745903 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 19:42:07.491207  745903 out.go:252]   - Booting up control plane ...
	I1017 19:42:07.491333  745903 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 19:42:07.491450  745903 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 19:42:07.491547  745903 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 19:42:07.491668  745903 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 19:42:07.491809  745903 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 19:42:07.491954  745903 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 19:42:07.492096  745903 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 19:42:07.492186  745903 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 19:42:07.492374  745903 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 19:42:07.492492  745903 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 19:42:07.492559  745903 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001789962s
	I1017 19:42:07.492650  745903 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 19:42:07.492758  745903 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1017 19:42:07.492900  745903 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 19:42:07.493019  745903 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 19:42:07.493146  745903 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.218966764s
	I1017 19:42:07.493240  745903 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.292476669s
	I1017 19:42:07.493334  745903 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001732588s
	I1017 19:42:07.493484  745903 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 19:42:07.493658  745903 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 19:42:07.493741  745903 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 19:42:07.494015  745903 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-112878 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 19:42:07.494097  745903 kubeadm.go:318] [bootstrap-token] Using token: d7re57.2kq1vkaf70o3u8fc
	I1017 19:42:07.496663  745903 out.go:252]   - Configuring RBAC rules ...
	I1017 19:42:07.496822  745903 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 19:42:07.496952  745903 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 19:42:07.497127  745903 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 19:42:07.497293  745903 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 19:42:07.497434  745903 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 19:42:07.497529  745903 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 19:42:07.497636  745903 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 19:42:07.497673  745903 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 19:42:07.497738  745903 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 19:42:07.497745  745903 kubeadm.go:318] 
	I1017 19:42:07.497800  745903 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 19:42:07.497805  745903 kubeadm.go:318] 
	I1017 19:42:07.497877  745903 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 19:42:07.497886  745903 kubeadm.go:318] 
	I1017 19:42:07.497915  745903 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 19:42:07.497963  745903 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 19:42:07.498008  745903 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 19:42:07.498014  745903 kubeadm.go:318] 
	I1017 19:42:07.498074  745903 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 19:42:07.498091  745903 kubeadm.go:318] 
	I1017 19:42:07.498142  745903 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 19:42:07.498149  745903 kubeadm.go:318] 
	I1017 19:42:07.498217  745903 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 19:42:07.498319  745903 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 19:42:07.498387  745903 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 19:42:07.498396  745903 kubeadm.go:318] 
	I1017 19:42:07.498521  745903 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 19:42:07.498639  745903 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 19:42:07.498652  745903 kubeadm.go:318] 
	I1017 19:42:07.498768  745903 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token d7re57.2kq1vkaf70o3u8fc \
	I1017 19:42:07.498911  745903 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e \
	I1017 19:42:07.498947  745903 kubeadm.go:318] 	--control-plane 
	I1017 19:42:07.498954  745903 kubeadm.go:318] 
	I1017 19:42:07.499077  745903 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 19:42:07.499089  745903 kubeadm.go:318] 
	I1017 19:42:07.499192  745903 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token d7re57.2kq1vkaf70o3u8fc \
	I1017 19:42:07.499349  745903 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e 
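	
	The --discovery-token-ca-cert-hash in both join commands pins the cluster CA: per the Kubernetes documentation it is the SHA-256 of the CA certificate's public key (DER-encoded SubjectPublicKeyInfo). It can be recomputed from the CA staged earlier in this log (sketch; openssl rsa assumes an RSA CA, which matches the key sizes seen above):
	
		openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'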
	I1017 19:42:07.499370  745903 cni.go:84] Creating CNI manager for ""
	I1017 19:42:07.499389  745903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:42:07.500922  745903 out.go:179] * Configuring CNI (Container Networking Interface) ...
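	
	The kindnet choice above ("docker" driver + "crio" runtime) materializes as a CNI conflist that CRI-O picks up through its config watcher; the CREATE event for 10-kindnet.conflist is visible in the CRI-O log below. Checking it on the node (sketch):
	
		sudo ls /etc/cni/net.d/                       # expect 10-kindnet.conflist
		sudo cat /etc/cni/net.d/10-kindnet.conflist   # the network CRI-O reports finding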
	
	
	==> CRI-O <==
	Oct 17 19:41:28 no-preload-171807 crio[562]: time="2025-10-17T19:41:28.753630167Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:41:28 no-preload-171807 crio[562]: time="2025-10-17T19:41:28.757469095Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:41:28 no-preload-171807 crio[562]: time="2025-10-17T19:41:28.757604416Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.915216435Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6a123bd3-3d4a-4829-b3ea-4facadcb8e5b name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.916250457Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f4157fbf-cae7-4b56-82f5-f4ba7407713e name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.917551761Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm/dashboard-metrics-scraper" id=69060cb6-aa49-47c5-9790-8344f08a7e8f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.918045413Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.92633991Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.927069665Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.961091931Z" level=info msg="Created container b00a978ba6c2ba2beea9f7bc631934a305976c12436f5a13772cbbabda6c49c3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm/dashboard-metrics-scraper" id=69060cb6-aa49-47c5-9790-8344f08a7e8f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.961973346Z" level=info msg="Starting container: b00a978ba6c2ba2beea9f7bc631934a305976c12436f5a13772cbbabda6c49c3" id=d9ddb5b3-9ab7-49bc-9e30-328bfec29f47 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.96436027Z" level=info msg="Started container" PID=1736 containerID=b00a978ba6c2ba2beea9f7bc631934a305976c12436f5a13772cbbabda6c49c3 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm/dashboard-metrics-scraper id=d9ddb5b3-9ab7-49bc-9e30-328bfec29f47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b02da81e7c2c9638d359df7ded04ecb871bc8ba874aba03f04dfc084e0d1351
	Oct 17 19:41:45 no-preload-171807 crio[562]: time="2025-10-17T19:41:45.024277135Z" level=info msg="Removing container: e4f1663501bd0ce9b6a40d5275c3f5abd2e7c8066a89e0b36bda675dda265af6" id=ad2349d0-812e-4a15-9dfd-dfd9b363f4c6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:41:45 no-preload-171807 crio[562]: time="2025-10-17T19:41:45.037792491Z" level=info msg="Removed container e4f1663501bd0ce9b6a40d5275c3f5abd2e7c8066a89e0b36bda675dda265af6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm/dashboard-metrics-scraper" id=ad2349d0-812e-4a15-9dfd-dfd9b363f4c6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.037597627Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=682d3c90-1fdb-4ef1-b0b1-f6b535c801a8 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.047258701Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f0fbbc59-41c9-4b95-903f-ac4d37e32730 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.048612317Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=3bf160af-457b-42f9-8d04-086483f9a2f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.048953943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.19697977Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.197238277Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fe78c967bdeef54f05cabed306b6c2e83ed2d4b57c4db159f9a4d6f801a2fe5e/merged/etc/passwd: no such file or directory"
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.197286118Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fe78c967bdeef54f05cabed306b6c2e83ed2d4b57c4db159f9a4d6f801a2fe5e/merged/etc/group: no such file or directory"
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.19762322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.376262586Z" level=info msg="Created container c38fc3e3e5753ebdf5ea7669f5ca235a915e6ca85e02b4d3a1dd0a1412bfb0b3: kube-system/storage-provisioner/storage-provisioner" id=3bf160af-457b-42f9-8d04-086483f9a2f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.377162361Z" level=info msg="Starting container: c38fc3e3e5753ebdf5ea7669f5ca235a915e6ca85e02b4d3a1dd0a1412bfb0b3" id=65be344d-3062-446e-84b7-69c701db9621 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.379658913Z" level=info msg="Started container" PID=1753 containerID=c38fc3e3e5753ebdf5ea7669f5ca235a915e6ca85e02b4d3a1dd0a1412bfb0b3 description=kube-system/storage-provisioner/storage-provisioner id=65be344d-3062-446e-84b7-69c701db9621 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f38c050ba6d27fcc5fa6891101af37d594cff3163b1d7c83d289fb378b7d590
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c38fc3e3e5753       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   8f38c050ba6d2       storage-provisioner                          kube-system
	b00a978ba6c2b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   0b02da81e7c2c       dashboard-metrics-scraper-6ffb444bf9-fqmgm   kubernetes-dashboard
	e35ca6f1c73b7       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   24042a14154aa       kubernetes-dashboard-855c9754f9-4kqlp        kubernetes-dashboard
	e92d1fe44275c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   7ed947a361ffd       busybox                                      default
	835887455a526       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   32357fe29053d       coredns-66bc5c9577-gnx5k                     kube-system
	a2184126b0f26       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   8f38c050ba6d2       storage-provisioner                          kube-system
	d022a76c654d2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   8c239914db371       kindnet-tk5hv                                kube-system
	8604f98158605       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   0e7c7331e3d42       kube-proxy-cdbjg                             kube-system
	d86dd76d8b3bd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   218302f0d68d3       kube-controller-manager-no-preload-171807    kube-system
	2c72f7d2bb251       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   055045014b278       kube-apiserver-no-preload-171807             kube-system
	2e00090e4a67b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   d6e37d3017f66       etcd-no-preload-171807                       kube-system
	3c4af638c6379       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   5db8b2a72a08c       kube-scheduler-no-preload-171807             kube-system
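The table above is the CRI-level view that CRI-O reports for the node. It can be reproduced on a live profile with the crictl client shipped in the node image; a sketch under the same assumptions as above:

	minikube -p no-preload-171807 ssh -- sudo crictl ps -a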
	
	
	==> coredns [835887455a526598d2d867876cd5a46611eab57d28140e1ba67e9ee8f72601e5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59993 - 44391 "HINFO IN 1845336670314001142.1568016722406365941. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.489644275s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
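All three list failures target the kubernetes Service VIP (10.96.0.1:443), the same address the first storage-provisioner instance times out against further down, so the API was briefly unreachable cluster-wide rather than from coredns alone. Whether the VIP is currently backed by an apiserver endpoint can be checked with the context name the harness itself uses; a sketch:

	kubectl --context no-preload-171807 get endpoints kubernetes -n default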
	
	
	==> describe nodes <==
	Name:               no-preload-171807
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-171807
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=no-preload-171807
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_40_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:40:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-171807
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:41:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:41:58 +0000   Fri, 17 Oct 2025 19:40:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:41:58 +0000   Fri, 17 Oct 2025 19:40:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:41:58 +0000   Fri, 17 Oct 2025 19:40:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:41:58 +0000   Fri, 17 Oct 2025 19:40:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-171807
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                4a402992-3a00-457b-a9c9-3f38efedf1af
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-gnx5k                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-no-preload-171807                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-tk5hv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-no-preload-171807              250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-no-preload-171807     200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-cdbjg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-no-preload-171807              100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fqmgm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4kqlp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node no-preload-171807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node no-preload-171807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node no-preload-171807 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s               node-controller  Node no-preload-171807 event: Registered Node no-preload-171807 in Controller
	  Normal  NodeReady                91s                kubelet          Node no-preload-171807 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 54s)  kubelet          Node no-preload-171807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 54s)  kubelet          Node no-preload-171807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 54s)  kubelet          Node no-preload-171807 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node no-preload-171807 event: Registered Node no-preload-171807 in Controller
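This section matches standard kubectl describe output and can be regenerated directly against the node named above:

	kubectl --context no-preload-171807 describe node no-preload-171807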
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d1 49 91 03 c2 08 06
	[  +0.000804] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 a9 2b 44 da ae 08 06
	[Oct17 18:59] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022229] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023876] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	
	
	==> etcd [2e00090e4a67b40ac53e71a16e43401493b444c9846af2e602339d93281be030] <==
	{"level":"warn","ts":"2025-10-17T19:41:16.351123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.358584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.366972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.374511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.403842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.411190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.418510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.425461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.432449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.447278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.453933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.466042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.474537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.481857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.495598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.504086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.511805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.558700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59624","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T19:41:48.241119Z","caller":"traceutil/trace.go:172","msg":"trace[931953741] transaction","detail":"{read_only:false; response_revision:614; number_of_response:1; }","duration":"148.453777ms","start":"2025-10-17T19:41:48.092610Z","end":"2025-10-17T19:41:48.241064Z","steps":["trace[931953741] 'process raft request'  (duration: 148.288402ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:41:49.702071Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.283746ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-gnx5k\" limit:1 ","response":"range_response_count:1 size:5935"}
	{"level":"warn","ts":"2025-10-17T19:41:49.702150Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.224133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/storage-provisioner.186f5ead6cc90046\" limit:1 ","response":"range_response_count:1 size:764"}
	{"level":"info","ts":"2025-10-17T19:41:49.702178Z","caller":"traceutil/trace.go:172","msg":"trace[47940972] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-gnx5k; range_end:; response_count:1; response_revision:618; }","duration":"117.386244ms","start":"2025-10-17T19:41:49.584777Z","end":"2025-10-17T19:41:49.702163Z","steps":["trace[47940972] 'range keys from in-memory index tree'  (duration: 117.145457ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:41:49.702198Z","caller":"traceutil/trace.go:172","msg":"trace[595834552] range","detail":"{range_begin:/registry/events/kube-system/storage-provisioner.186f5ead6cc90046; range_end:; response_count:1; response_revision:618; }","duration":"118.297058ms","start":"2025-10-17T19:41:49.583888Z","end":"2025-10-17T19:41:49.702185Z","steps":["trace[595834552] 'range keys from in-memory index tree'  (duration: 118.072243ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:41:49.702053Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.802737ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T19:41:49.702332Z","caller":"traceutil/trace.go:172","msg":"trace[987343703] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:618; }","duration":"185.102736ms","start":"2025-10-17T19:41:49.517210Z","end":"2025-10-17T19:41:49.702313Z","steps":["trace[987343703] 'range keys from in-memory index tree'  (duration: 184.728558ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:42:08 up  3:24,  0 user,  load average: 4.36, 3.44, 2.16
	Linux no-preload-171807 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d022a76c654d2e18ebf220443cc9aab41bb02d48d7f4800b39daf43d8ce2eea1] <==
	I1017 19:41:18.536193       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:41:18.536512       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1017 19:41:18.536761       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:41:18.536782       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:41:18.536820       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:41:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:41:18.736169       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:41:18.736237       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:41:18.736250       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:41:18.737730       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:41:19.137081       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:41:19.137112       1 metrics.go:72] Registering metrics
	I1017 19:41:19.137193       1 controller.go:711] "Syncing nftables rules"
	I1017 19:41:28.736058       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 19:41:28.736105       1 main.go:301] handling current node
	I1017 19:41:38.739036       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 19:41:38.739096       1 main.go:301] handling current node
	I1017 19:41:48.736949       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 19:41:48.737008       1 main.go:301] handling current node
	I1017 19:41:58.736931       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 19:41:58.736986       1 main.go:301] handling current node
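kindnet resyncs its single node every ten seconds and reports no errors after the initial nri.sock warning, so pod networking looks healthy at collection time. The same stream can be tailed from the pod named in the container table above; a sketch:

	kubectl --context no-preload-171807 -n kube-system logs -f kindnet-tk5hv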
	
	
	==> kube-apiserver [2c72f7d2bb251ff207976219245143bbd296d8b6a6495c2e5556d0e9da8f1099] <==
	I1017 19:41:17.068510       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 19:41:17.068650       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1017 19:41:17.069175       1 aggregator.go:171] initial CRD sync complete...
	I1017 19:41:17.069186       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 19:41:17.069193       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 19:41:17.069199       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:41:17.068514       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 19:41:17.068603       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1017 19:41:17.074993       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 19:41:17.077209       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 19:41:17.107636       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 19:41:17.124180       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 19:41:17.124214       1 policy_source.go:240] refreshing policies
	I1017 19:41:17.126778       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:41:17.317593       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 19:41:17.347870       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 19:41:17.371268       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:41:17.379468       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:41:17.388037       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:41:17.429566       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.164.82"}
	I1017 19:41:17.440122       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.237.19"}
	I1017 19:41:17.971259       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:41:20.879781       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:41:20.977487       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:41:21.029311       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d86dd76d8b3bd2505d622c4f7afdac7241ad790540b4197dfa7a873877fdd920] <==
	I1017 19:41:20.408749       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 19:41:20.411016       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 19:41:20.413152       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 19:41:20.415423       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 19:41:20.417715       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 19:41:20.425441       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 19:41:20.425471       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 19:41:20.425507       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 19:41:20.425584       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:41:20.425606       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:41:20.425618       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 19:41:20.425713       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 19:41:20.425834       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 19:41:20.425926       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:41:20.428391       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 19:41:20.431393       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 19:41:20.431720       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 19:41:20.431790       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 19:41:20.431824       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 19:41:20.431833       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 19:41:20.431843       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 19:41:20.432893       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:41:20.455073       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 19:41:20.461517       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:41:20.465070       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8604f98158605205b8f1f8315ebc37171cf7eca33ac7f8dff67117b30bbd6b4d] <==
	I1017 19:41:18.316406       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:41:18.377636       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:41:18.477951       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:41:18.478010       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1017 19:41:18.478100       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:41:18.497598       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:41:18.497645       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:41:18.502892       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:41:18.503273       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:41:18.503303       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:41:18.504407       1 config.go:200] "Starting service config controller"
	I1017 19:41:18.504438       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:41:18.504440       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:41:18.504451       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:41:18.504417       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:41:18.504490       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:41:18.504517       1 config.go:309] "Starting node config controller"
	I1017 19:41:18.504527       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:41:18.504539       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:41:18.604635       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:41:18.604635       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:41:18.604635       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3c4af638c6379e21034b2badcf605ec633afc47f689a92da70fdcdf1faa4d286] <==
	I1017 19:41:16.307605       1 serving.go:386] Generated self-signed cert in-memory
	W1017 19:41:17.034883       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 19:41:17.034927       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1017 19:41:17.034942       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 19:41:17.034953       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 19:41:17.063375       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 19:41:17.063409       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:41:17.066398       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:41:17.066446       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:41:17.066824       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 19:41:17.066968       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 19:41:17.167063       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:41:21 no-preload-171807 kubelet[709]: I1017 19:41:21.060944     709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6972q\" (UniqueName: \"kubernetes.io/projected/34b498e2-7851-4f05-b246-fe3c7cebbaf9-kube-api-access-6972q\") pod \"dashboard-metrics-scraper-6ffb444bf9-fqmgm\" (UID: \"34b498e2-7851-4f05-b246-fe3c7cebbaf9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm"
	Oct 17 19:41:21 no-preload-171807 kubelet[709]: I1017 19:41:21.643467     709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 17 19:41:23 no-preload-171807 kubelet[709]: I1017 19:41:23.960985     709 scope.go:117] "RemoveContainer" containerID="77fa54ba8c102c85dac4e91877674342372da44cf21b0460f3bb3acacc6202b2"
	Oct 17 19:41:24 no-preload-171807 kubelet[709]: I1017 19:41:24.965565     709 scope.go:117] "RemoveContainer" containerID="77fa54ba8c102c85dac4e91877674342372da44cf21b0460f3bb3acacc6202b2"
	Oct 17 19:41:24 no-preload-171807 kubelet[709]: I1017 19:41:24.965794     709 scope.go:117] "RemoveContainer" containerID="e4f1663501bd0ce9b6a40d5275c3f5abd2e7c8066a89e0b36bda675dda265af6"
	Oct 17 19:41:24 no-preload-171807 kubelet[709]: E1017 19:41:24.966001     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fqmgm_kubernetes-dashboard(34b498e2-7851-4f05-b246-fe3c7cebbaf9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm" podUID="34b498e2-7851-4f05-b246-fe3c7cebbaf9"
	Oct 17 19:41:25 no-preload-171807 kubelet[709]: I1017 19:41:25.970627     709 scope.go:117] "RemoveContainer" containerID="e4f1663501bd0ce9b6a40d5275c3f5abd2e7c8066a89e0b36bda675dda265af6"
	Oct 17 19:41:25 no-preload-171807 kubelet[709]: E1017 19:41:25.970845     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fqmgm_kubernetes-dashboard(34b498e2-7851-4f05-b246-fe3c7cebbaf9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm" podUID="34b498e2-7851-4f05-b246-fe3c7cebbaf9"
	Oct 17 19:41:31 no-preload-171807 kubelet[709]: I1017 19:41:31.105352     709 scope.go:117] "RemoveContainer" containerID="e4f1663501bd0ce9b6a40d5275c3f5abd2e7c8066a89e0b36bda675dda265af6"
	Oct 17 19:41:31 no-preload-171807 kubelet[709]: E1017 19:41:31.105593     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fqmgm_kubernetes-dashboard(34b498e2-7851-4f05-b246-fe3c7cebbaf9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm" podUID="34b498e2-7851-4f05-b246-fe3c7cebbaf9"
	Oct 17 19:41:34 no-preload-171807 kubelet[709]: I1017 19:41:34.517290     709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4kqlp" podStartSLOduration=8.946824263 podStartE2EDuration="14.517268125s" podCreationTimestamp="2025-10-17 19:41:20 +0000 UTC" firstStartedPulling="2025-10-17 19:41:21.275437289 +0000 UTC m=+6.452303591" lastFinishedPulling="2025-10-17 19:41:26.845881165 +0000 UTC m=+12.022747453" observedRunningTime="2025-10-17 19:41:26.985665623 +0000 UTC m=+12.162531930" watchObservedRunningTime="2025-10-17 19:41:34.517268125 +0000 UTC m=+19.694134433"
	Oct 17 19:41:44 no-preload-171807 kubelet[709]: I1017 19:41:44.914495     709 scope.go:117] "RemoveContainer" containerID="e4f1663501bd0ce9b6a40d5275c3f5abd2e7c8066a89e0b36bda675dda265af6"
	Oct 17 19:41:45 no-preload-171807 kubelet[709]: I1017 19:41:45.022814     709 scope.go:117] "RemoveContainer" containerID="e4f1663501bd0ce9b6a40d5275c3f5abd2e7c8066a89e0b36bda675dda265af6"
	Oct 17 19:41:45 no-preload-171807 kubelet[709]: I1017 19:41:45.023064     709 scope.go:117] "RemoveContainer" containerID="b00a978ba6c2ba2beea9f7bc631934a305976c12436f5a13772cbbabda6c49c3"
	Oct 17 19:41:45 no-preload-171807 kubelet[709]: E1017 19:41:45.023273     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fqmgm_kubernetes-dashboard(34b498e2-7851-4f05-b246-fe3c7cebbaf9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm" podUID="34b498e2-7851-4f05-b246-fe3c7cebbaf9"
	Oct 17 19:41:49 no-preload-171807 kubelet[709]: I1017 19:41:49.037192     709 scope.go:117] "RemoveContainer" containerID="a2184126b0f26d397ffbbb79f922291dad5e971092ca6caa2f3d7d4cb54166c9"
	Oct 17 19:41:51 no-preload-171807 kubelet[709]: I1017 19:41:51.106349     709 scope.go:117] "RemoveContainer" containerID="b00a978ba6c2ba2beea9f7bc631934a305976c12436f5a13772cbbabda6c49c3"
	Oct 17 19:41:51 no-preload-171807 kubelet[709]: E1017 19:41:51.106586     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fqmgm_kubernetes-dashboard(34b498e2-7851-4f05-b246-fe3c7cebbaf9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm" podUID="34b498e2-7851-4f05-b246-fe3c7cebbaf9"
	Oct 17 19:42:03 no-preload-171807 kubelet[709]: I1017 19:42:03.913589     709 scope.go:117] "RemoveContainer" containerID="b00a978ba6c2ba2beea9f7bc631934a305976c12436f5a13772cbbabda6c49c3"
	Oct 17 19:42:03 no-preload-171807 kubelet[709]: E1017 19:42:03.913826     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fqmgm_kubernetes-dashboard(34b498e2-7851-4f05-b246-fe3c7cebbaf9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm" podUID="34b498e2-7851-4f05-b246-fe3c7cebbaf9"
	Oct 17 19:42:05 no-preload-171807 kubelet[709]: I1017 19:42:05.686153     709 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 17 19:42:05 no-preload-171807 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 19:42:05 no-preload-171807 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 19:42:05 no-preload-171807 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 19:42:05 no-preload-171807 systemd[1]: kubelet.service: Consumed 1.697s CPU time.
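Two things stand out here: dashboard-metrics-scraper is cycling through a 10s/20s CrashLoopBackOff, and kubelet.service is stopped cleanly at 19:42:05, which is consistent with the Pause step having just paused the node before these logs were collected. The backoff can be confirmed per pod; a sketch:

	kubectl --context no-preload-171807 -n kubernetes-dashboard get pod dashboard-metrics-scraper-6ffb444bf9-fqmgm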
	
	
	==> kubernetes-dashboard [e35ca6f1c73b7d72497bda5266b591c7c57a2476a6ec5fa6c61165d1cdde7cad] <==
	2025/10/17 19:41:26 Starting overwatch
	2025/10/17 19:41:26 Using namespace: kubernetes-dashboard
	2025/10/17 19:41:26 Using in-cluster config to connect to apiserver
	2025/10/17 19:41:26 Using secret token for csrf signing
	2025/10/17 19:41:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 19:41:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 19:41:26 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 19:41:26 Generating JWE encryption key
	2025/10/17 19:41:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 19:41:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 19:41:27 Initializing JWE encryption key from synchronized object
	2025/10/17 19:41:27 Creating in-cluster Sidecar client
	2025/10/17 19:41:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 19:41:27 Serving insecurely on HTTP port: 9090
	2025/10/17 19:41:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a2184126b0f26d397ffbbb79f922291dad5e971092ca6caa2f3d7d4cb54166c9] <==
	I1017 19:41:18.281213       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 19:41:48.283837       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
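This fatal exit is the Exited storage-provisioner entry (attempt 0) in the container table: the first instance waited 30 seconds for the apiserver VIP, gave up, and the kubelet started attempt 1, whose log follows. The crashed attempt's output can be retrieved after the fact with --previous; a sketch:

	kubectl --context no-preload-171807 -n kube-system logs storage-provisioner --previous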
	
	
	==> storage-provisioner [c38fc3e3e5753ebdf5ea7669f5ca235a915e6ca85e02b4d3a1dd0a1412bfb0b3] <==
	I1017 19:41:49.393339       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 19:41:49.401318       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 19:41:49.401366       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 19:41:49.423343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:41:52.878431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:41:57.139296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:00.738518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:03.793348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:06.816143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:06.821548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:42:06.821805       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 19:42:06.821990       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-171807_b4b7f7e0-b06a-4c76-b258-b44189a5885e!
	I1017 19:42:06.822010       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e74a4cee-e08d-4268-aaf1-9d923d1555d4", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-171807_b4b7f7e0-b06a-4c76-b258-b44189a5885e became leader
	W1017 19:42:06.824623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:06.828238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:42:06.922252       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-171807_b4b7f7e0-b06a-4c76-b258-b44189a5885e!
	W1017 19:42:08.832554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:08.837170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
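The repeated Endpoints deprecation warnings come from the provisioner's legacy leader election, which stores its lease on the kube-system/k8s.io-minikube-hostpath Endpoints object named in the event above. The current holder can be inspected directly; a sketch:

	kubectl --context no-preload-171807 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml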
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-171807 -n no-preload-171807
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-171807 -n no-preload-171807: exit status 2 (356.28144ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-171807 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-171807
helpers_test.go:243: (dbg) docker inspect no-preload-171807:

-- stdout --
	[
	    {
	        "Id": "6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5",
	        "Created": "2025-10-17T19:39:49.424559642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 737031,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:41:08.552977888Z",
	            "FinishedAt": "2025-10-17T19:41:07.200630261Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5/hostname",
	        "HostsPath": "/var/lib/docker/containers/6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5/hosts",
	        "LogPath": "/var/lib/docker/containers/6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5/6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5-json.log",
	        "Name": "/no-preload-171807",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-171807:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-171807",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6738402fa93e143430ae2d5b8e2230a70ebaadd4b5f882988414cd70bfdd23a5",
	                "LowerDir": "/var/lib/docker/overlay2/2465273c560fa18f3af90b746f46f6002d9f83f3da22434fa2cf4768a02a24de-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2465273c560fa18f3af90b746f46f6002d9f83f3da22434fa2cf4768a02a24de/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2465273c560fa18f3af90b746f46f6002d9f83f3da22434fa2cf4768a02a24de/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2465273c560fa18f3af90b746f46f6002d9f83f3da22434fa2cf4768a02a24de/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-171807",
	                "Source": "/var/lib/docker/volumes/no-preload-171807/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-171807",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-171807",
	                "name.minikube.sigs.k8s.io": "no-preload-171807",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "64f62ea835f43cf3044abc0f0847d4ed2b8981195777d845bb804a8fc1a98665",
	            "SandboxKey": "/var/run/docker/netns/64f62ea835f4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-171807": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:35:2e:52:01:4b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4d20f1cdd8a9ad4b75566b03de0ba176c437b8596d360733d4786d1a9071e68d",
	                    "EndpointID": "cf4b48de40fe4efc66e471ceeb9ffe9d78c77e169f71f2c651ec88b58a8bc4e1",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-171807",
	                        "6738402fa93e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
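The inspect output above explains why the pause helper can still reach the node: all five container ports (22, 2376, 5000, 8443, 32443) are published on 127.0.0.1 with ephemeral host ports, left empty in HostConfig.PortBindings and resolved to 33443-33447 in NetworkSettings.Ports. A minimal Go sketch of pulling one resolved port out with `docker inspect --format`; the container name is taken from this run, and hostPort is a hypothetical helper, not part of the harness:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort resolves the host port Docker assigned to a container port,
// mirroring the NetworkSettings.Ports block in the inspect output above.
func hostPort(container, port string) (string, error) {
	// Equivalent to:
	//   docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-171807
	format := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", port)
	out, err := exec.Command("docker", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("no-preload-171807", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", p) // prints 33443 for the run above
}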
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-171807 -n no-preload-171807
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-171807 -n no-preload-171807: exit status 2 (333.180183ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
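The helper tolerates exit status 2 here because `minikube status` encodes component state in its exit code, so the host line can read Running (as in the stdout above) while the overall status is still non-zero. A minimal sketch, assuming the same binary path and profile as this run, of separating "command failed to start" from "ran but returned a non-zero status":

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the helper makes; binary path and profile are from this run.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-171807")
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit: the host line may still say "Running" (as above),
		// so record the code instead of treating it as a hard failure.
		fmt.Printf("status exit code %d (may be ok)\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err) // e.g. binary not found
		return
	}
	fmt.Printf("host state: %s", out)
}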
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-171807 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-171807 logs -n 25: (1.257042632s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p pause-022753                                                                                                                                                                                                                               │ pause-022753                 │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:39 UTC │
	│ start   │ -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:39 UTC │ 17 Oct 25 19:40 UTC │
	│ start   │ -p cert-expiration-141205 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-141205       │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ delete  │ -p cert-expiration-141205                                                                                                                                                                                                                     │ cert-expiration-141205       │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-907112 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ start   │ -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ stop    │ -p old-k8s-version-907112 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-907112 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:40 UTC │
	│ start   │ -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable metrics-server -p no-preload-171807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ stop    │ -p no-preload-171807 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p no-preload-171807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-599709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ stop    │ -p embed-certs-599709 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p embed-certs-599709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ image   │ old-k8s-version-907112 image list --format=json                                                                                                                                                                                               │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-907112 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ delete  │ -p old-k8s-version-907112                                                                                                                                                                                                                     │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-907112                                                                                                                                                                                                                     │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ delete  │ -p disable-driver-mounts-220565                                                                                                                                                                                                               │ disable-driver-mounts-220565 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p default-k8s-diff-port-112878 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ image   │ no-preload-171807 image list --format=json                                                                                                                                                                                                    │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ pause   │ -p no-preload-171807 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:41:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:41:43.616967  745903 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:41:43.617358  745903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:41:43.617370  745903 out.go:374] Setting ErrFile to fd 2...
	I1017 19:41:43.617376  745903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:41:43.617742  745903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:41:43.618276  745903 out.go:368] Setting JSON to false
	I1017 19:41:43.620011  745903 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12243,"bootTime":1760717861,"procs":354,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:41:43.620210  745903 start.go:141] virtualization: kvm guest
	I1017 19:41:43.622362  745903 out.go:179] * [default-k8s-diff-port-112878] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:41:43.624054  745903 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:41:43.624039  745903 notify.go:220] Checking for updates...
	I1017 19:41:43.626637  745903 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:41:43.628483  745903 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:41:43.629882  745903 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:41:43.631360  745903 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:41:43.632653  745903 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:41:43.634991  745903 config.go:182] Loaded profile config "embed-certs-599709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:41:43.635125  745903 config.go:182] Loaded profile config "kubernetes-upgrade-137244": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:41:43.635241  745903 config.go:182] Loaded profile config "no-preload-171807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:41:43.635365  745903 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:41:43.666650  745903 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:41:43.666770  745903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:41:43.742138  745903 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 19:41:43.728916302 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:41:43.742286  745903 docker.go:318] overlay module found
	I1017 19:41:43.745204  745903 out.go:179] * Using the docker driver based on user configuration
	I1017 19:41:38.938295  741107 addons.go:514] duration metric: took 2.412388436s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1017 19:41:39.424761  741107 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1017 19:41:39.430465  741107 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 19:41:39.430498  741107 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 19:41:39.924838  741107 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1017 19:41:39.929414  741107 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1017 19:41:39.930574  741107 api_server.go:141] control plane version: v1.34.1
	I1017 19:41:39.930605  741107 api_server.go:131] duration metric: took 1.006244433s to wait for apiserver health ...
	I1017 19:41:39.930616  741107 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:41:39.934755  741107 system_pods.go:59] 8 kube-system pods found
	I1017 19:41:39.934805  741107 system_pods.go:61] "coredns-66bc5c9577-v8hls" [a5c14de3-5736-4bb4-b7d4-7eee1aade5e2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:41:39.934817  741107 system_pods.go:61] "etcd-embed-certs-599709" [bb79f8c8-ab08-444c-9a40-a5350363cc1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:41:39.934825  741107 system_pods.go:61] "kindnet-sj7sj" [7e5aa5b6-57e8-4ad9-9b23-53eeffd10715] Running
	I1017 19:41:39.934834  741107 system_pods.go:61] "kube-apiserver-embed-certs-599709" [a32d29b8-0363-444d-9b3c-7783f55fa404] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:41:39.934845  741107 system_pods.go:61] "kube-controller-manager-embed-certs-599709" [e88ac6d0-9e7a-4fcd-ac5a-b39168c76bcf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:41:39.934861  741107 system_pods.go:61] "kube-proxy-l2pwz" [1ea9dbf3-19b4-4b54-95c1-df8fa679f2bb] Running
	I1017 19:41:39.934870  741107 system_pods.go:61] "kube-scheduler-embed-certs-599709" [6d1db335-0b58-4714-b27a-502897391843] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:41:39.934875  741107 system_pods.go:61] "storage-provisioner" [2d8a3a4d-3738-4d33-98fd-b99622f860ec] Running
	I1017 19:41:39.934886  741107 system_pods.go:74] duration metric: took 4.262687ms to wait for pod list to return data ...
	I1017 19:41:39.934898  741107 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:41:39.938025  741107 default_sa.go:45] found service account: "default"
	I1017 19:41:39.938051  741107 default_sa.go:55] duration metric: took 3.145055ms for default service account to be created ...
	I1017 19:41:39.938061  741107 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:41:39.941165  741107 system_pods.go:86] 8 kube-system pods found
	I1017 19:41:39.941193  741107 system_pods.go:89] "coredns-66bc5c9577-v8hls" [a5c14de3-5736-4bb4-b7d4-7eee1aade5e2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:41:39.941202  741107 system_pods.go:89] "etcd-embed-certs-599709" [bb79f8c8-ab08-444c-9a40-a5350363cc1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:41:39.941209  741107 system_pods.go:89] "kindnet-sj7sj" [7e5aa5b6-57e8-4ad9-9b23-53eeffd10715] Running
	I1017 19:41:39.941223  741107 system_pods.go:89] "kube-apiserver-embed-certs-599709" [a32d29b8-0363-444d-9b3c-7783f55fa404] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:41:39.941235  741107 system_pods.go:89] "kube-controller-manager-embed-certs-599709" [e88ac6d0-9e7a-4fcd-ac5a-b39168c76bcf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:41:39.941243  741107 system_pods.go:89] "kube-proxy-l2pwz" [1ea9dbf3-19b4-4b54-95c1-df8fa679f2bb] Running
	I1017 19:41:39.941254  741107 system_pods.go:89] "kube-scheduler-embed-certs-599709" [6d1db335-0b58-4714-b27a-502897391843] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:41:39.941261  741107 system_pods.go:89] "storage-provisioner" [2d8a3a4d-3738-4d33-98fd-b99622f860ec] Running
	I1017 19:41:39.941270  741107 system_pods.go:126] duration metric: took 3.202094ms to wait for k8s-apps to be running ...
	I1017 19:41:39.941280  741107 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:41:39.941337  741107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:41:39.956251  741107 system_svc.go:56] duration metric: took 14.959792ms WaitForService to wait for kubelet
	I1017 19:41:39.956280  741107 kubeadm.go:586] duration metric: took 3.430460959s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:41:39.956299  741107 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:41:39.959465  741107 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 19:41:39.959498  741107 node_conditions.go:123] node cpu capacity is 8
	I1017 19:41:39.959516  741107 node_conditions.go:105] duration metric: took 3.212196ms to run NodePressure ...
	I1017 19:41:39.959532  741107 start.go:241] waiting for startup goroutines ...
	I1017 19:41:39.959545  741107 start.go:246] waiting for cluster config update ...
	I1017 19:41:39.959562  741107 start.go:255] writing updated cluster config ...
	I1017 19:41:39.959870  741107 ssh_runner.go:195] Run: rm -f paused
	I1017 19:41:39.964034  741107 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:41:39.968215  741107 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v8hls" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 19:41:41.974818  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	I1017 19:41:43.747723  745903 start.go:305] selected driver: docker
	I1017 19:41:43.747745  745903 start.go:925] validating driver "docker" against <nil>
	I1017 19:41:43.747765  745903 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:41:43.748599  745903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:41:43.834246  745903 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 19:41:43.820003123 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:41:43.834457  745903 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 19:41:43.834794  745903 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:41:43.836542  745903 out.go:179] * Using Docker driver with root privileges
	I1017 19:41:43.837654  745903 cni.go:84] Creating CNI manager for ""
	I1017 19:41:43.837752  745903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:41:43.837766  745903 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 19:41:43.837870  745903 start.go:349] cluster config:
	{Name:default-k8s-diff-port-112878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-112878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:41:43.839819  745903 out.go:179] * Starting "default-k8s-diff-port-112878" primary control-plane node in "default-k8s-diff-port-112878" cluster
	I1017 19:41:43.841766  745903 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:41:43.843336  745903 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:41:43.844530  745903 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:41:43.844589  745903 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:41:43.844606  745903 cache.go:58] Caching tarball of preloaded images
	I1017 19:41:43.844597  745903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:41:43.844760  745903 preload.go:233] Found /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:41:43.844775  745903 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:41:43.844935  745903 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/config.json ...
	I1017 19:41:43.844961  745903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/config.json: {Name:mk460bf2d77dd3a84f681f1f712690b68fc42abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:43.875006  745903 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:41:43.875056  745903 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:41:43.875078  745903 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:41:43.875117  745903 start.go:360] acquireMachinesLock for default-k8s-diff-port-112878: {Name:mke65bf3d91761a71e610a747337c18b9c7b5f17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:41:43.875239  745903 start.go:364] duration metric: took 96.42µs to acquireMachinesLock for "default-k8s-diff-port-112878"
	I1017 19:41:43.875268  745903 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-112878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-112878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:41:43.875375  745903 start.go:125] createHost starting for "" (driver="docker")
	I1017 19:41:43.762832  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:41:43.763278  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:41:43.763335  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:41:43.763389  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:41:43.811132  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:43.811153  696997 cri.go:89] found id: ""
	I1017 19:41:43.811164  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:41:43.811234  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:43.817567  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:41:43.817676  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:41:43.856013  696997 cri.go:89] found id: ""
	I1017 19:41:43.856042  696997 logs.go:282] 0 containers: []
	W1017 19:41:43.856054  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:41:43.856062  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:41:43.856124  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:41:43.894920  696997 cri.go:89] found id: ""
	I1017 19:41:43.894946  696997 logs.go:282] 0 containers: []
	W1017 19:41:43.894957  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:41:43.894965  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:41:43.895031  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:41:43.932368  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:43.932392  696997 cri.go:89] found id: ""
	I1017 19:41:43.932403  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:41:43.932461  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:43.938431  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:41:43.938507  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:41:43.979712  696997 cri.go:89] found id: ""
	I1017 19:41:43.979742  696997 logs.go:282] 0 containers: []
	W1017 19:41:43.979752  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:41:43.979760  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:41:43.979832  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:41:44.020552  696997 cri.go:89] found id: "97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:44.020581  696997 cri.go:89] found id: "ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:44.020587  696997 cri.go:89] found id: ""
	I1017 19:41:44.020598  696997 logs.go:282] 2 containers: [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770]
	I1017 19:41:44.020665  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:44.025980  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:44.031254  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:41:44.031330  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:41:44.069342  696997 cri.go:89] found id: ""
	I1017 19:41:44.069373  696997 logs.go:282] 0 containers: []
	W1017 19:41:44.069387  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:41:44.069435  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:41:44.069509  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:41:44.106630  696997 cri.go:89] found id: ""
	I1017 19:41:44.106663  696997 logs.go:282] 0 containers: []
	W1017 19:41:44.106675  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:41:44.106720  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:41:44.106742  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:41:44.182299  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:41:44.182353  696997 logs.go:123] Gathering logs for kube-controller-manager [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe] ...
	I1017 19:41:44.182377  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:44.219169  696997 logs.go:123] Gathering logs for kube-controller-manager [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770] ...
	I1017 19:41:44.219201  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:44.258283  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:41:44.258324  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:41:44.342259  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:41:44.342312  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:41:44.387622  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:41:44.387663  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:44.436656  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:41:44.436709  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:44.520151  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:41:44.520199  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:41:44.656916  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:41:44.656962  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:41:47.184769  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:41:47.185282  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:41:47.185353  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:41:47.185420  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:41:47.223769  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:47.223797  696997 cri.go:89] found id: ""
	I1017 19:41:47.223807  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:41:47.223867  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:47.229382  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:41:47.229458  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:41:47.266903  696997 cri.go:89] found id: ""
	I1017 19:41:47.266933  696997 logs.go:282] 0 containers: []
	W1017 19:41:47.266944  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:41:47.266952  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:41:47.267018  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:41:47.307547  696997 cri.go:89] found id: ""
	I1017 19:41:47.307580  696997 logs.go:282] 0 containers: []
	W1017 19:41:47.307593  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:41:47.307602  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:41:47.307666  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:41:47.345899  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:47.345938  696997 cri.go:89] found id: ""
	I1017 19:41:47.345950  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:41:47.346017  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:47.351849  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:41:47.351926  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:41:47.392001  696997 cri.go:89] found id: ""
	I1017 19:41:47.392035  696997 logs.go:282] 0 containers: []
	W1017 19:41:47.392048  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:41:47.392056  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:41:47.392122  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	W1017 19:41:45.089814  736846 pod_ready.go:104] pod "coredns-66bc5c9577-gnx5k" is not "Ready", error: <nil>
	W1017 19:41:47.589488  736846 pod_ready.go:104] pod "coredns-66bc5c9577-gnx5k" is not "Ready", error: <nil>
	I1017 19:41:43.879206  745903 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 19:41:43.879517  745903 start.go:159] libmachine.API.Create for "default-k8s-diff-port-112878" (driver="docker")
	I1017 19:41:43.879567  745903 client.go:168] LocalClient.Create starting
	I1017 19:41:43.879704  745903 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem
	I1017 19:41:43.879749  745903 main.go:141] libmachine: Decoding PEM data...
	I1017 19:41:43.879767  745903 main.go:141] libmachine: Parsing certificate...
	I1017 19:41:43.879835  745903 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem
	I1017 19:41:43.879856  745903 main.go:141] libmachine: Decoding PEM data...
	I1017 19:41:43.879870  745903 main.go:141] libmachine: Parsing certificate...
	I1017 19:41:43.880388  745903 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-112878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 19:41:43.905749  745903 cli_runner.go:211] docker network inspect default-k8s-diff-port-112878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 19:41:43.905868  745903 network_create.go:284] running [docker network inspect default-k8s-diff-port-112878] to gather additional debugging logs...
	I1017 19:41:43.905890  745903 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-112878
	W1017 19:41:43.930692  745903 cli_runner.go:211] docker network inspect default-k8s-diff-port-112878 returned with exit code 1
	I1017 19:41:43.930807  745903 network_create.go:287] error running [docker network inspect default-k8s-diff-port-112878]: docker network inspect default-k8s-diff-port-112878: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-112878 not found
	I1017 19:41:43.930847  745903 network_create.go:289] output of [docker network inspect default-k8s-diff-port-112878]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-112878 not found
	
	** /stderr **
	I1017 19:41:43.930993  745903 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:41:43.955597  745903 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-730d915fa684 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e2:02:cd:78:1c:78} reservation:<nil>}
	I1017 19:41:43.956743  745903 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c0eb20920271 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:96:b3:b5:eb:1f:90} reservation:<nil>}
	I1017 19:41:43.957396  745903 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b9c5a6663579 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:42:f3:20:fa:08:4c} reservation:<nil>}
	I1017 19:41:43.958158  745903 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ff724deaa8b6 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:de:b9:ef:51:e1:16} reservation:<nil>}
	I1017 19:41:43.959251  745903 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001edf1c0}
	I1017 19:41:43.959278  745903 network_create.go:124] attempt to create docker network default-k8s-diff-port-112878 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1017 19:41:43.959337  745903 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-112878 default-k8s-diff-port-112878
	I1017 19:41:44.039643  745903 network_create.go:108] docker network default-k8s-diff-port-112878 192.168.85.0/24 created
	I1017 19:41:44.039698  745903 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-112878" container
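The subnet scan above walks the existing docker bridges (192.168.49/58/67/76) before settling on 192.168.85.0/24. A minimal way to reproduce that scan by hand, assuming a stock docker CLI on the same host:

    docker network ls --filter driver=bridge -q \
      | xargs -n1 docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'

Any subnet printed here is "taken" in the sense used by network.go:211; as the log shows, minikube then picks the next free private /24 for the new profile.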
	I1017 19:41:44.039799  745903 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 19:41:44.063696  745903 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-112878 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-112878 --label created_by.minikube.sigs.k8s.io=true
	I1017 19:41:44.088602  745903 oci.go:103] Successfully created a docker volume default-k8s-diff-port-112878
	I1017 19:41:44.088703  745903 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-112878-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-112878 --entrypoint /usr/bin/test -v default-k8s-diff-port-112878:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 19:41:44.686628  745903 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-112878
	I1017 19:41:44.686677  745903 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:41:44.686734  745903 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 19:41:44.686842  745903 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-112878:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
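The preload sidecar mounts the freshly created volume at /var and untars the cri-o image cache into it before the real node container starts. To peek at what the extraction produced (a sketch only, mirroring the --entrypoint pattern the log itself uses; the exact layout under /var/lib is an assumption):

    docker run --rm --entrypoint /bin/ls \
      -v default-k8s-diff-port-112878:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757 -la /var/lib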
	W1017 19:41:43.978630  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	W1017 19:41:46.475028  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	I1017 19:41:47.425699  696997 cri.go:89] found id: "97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:47.425726  696997 cri.go:89] found id: "ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:47.425731  696997 cri.go:89] found id: ""
	I1017 19:41:47.425740  696997 logs.go:282] 2 containers: [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770]
	I1017 19:41:47.425808  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:47.430323  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:47.435004  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:41:47.435082  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:41:47.466554  696997 cri.go:89] found id: ""
	I1017 19:41:47.466590  696997 logs.go:282] 0 containers: []
	W1017 19:41:47.466601  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:41:47.466609  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:41:47.466667  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:41:47.500006  696997 cri.go:89] found id: ""
	I1017 19:41:47.500088  696997 logs.go:282] 0 containers: []
	W1017 19:41:47.500110  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:41:47.500135  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:41:47.500155  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:41:47.619242  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:41:47.619283  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:41:47.645496  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:41:47.645538  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:47.691843  696997 logs.go:123] Gathering logs for kube-controller-manager [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe] ...
	I1017 19:41:47.691887  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:47.732742  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:41:47.732778  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:41:47.816041  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:41:47.816089  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:41:47.859924  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:41:47.859956  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:41:47.940213  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:41:47.940236  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:41:47.940253  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:48.016195  696997 logs.go:123] Gathering logs for kube-controller-manager [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770] ...
	I1017 19:41:48.016245  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:50.557764  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:41:50.558213  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
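The connection-refused results here mean the apiserver container exists but nothing is listening on 8443 yet. Once it comes up, the same probe can be run by hand; under default RBAC rules /healthz is readable without credentials, and -k is needed because the serving cert chains to minikubeCA rather than a system root (a sketch):

    curl -sk https://192.168.76.2:8443/healthz; echo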
	I1017 19:41:50.558272  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:41:50.558325  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:41:50.590784  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:50.590810  696997 cri.go:89] found id: ""
	I1017 19:41:50.590820  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:41:50.590881  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:50.595382  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:41:50.595466  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:41:50.627480  696997 cri.go:89] found id: ""
	I1017 19:41:50.627515  696997 logs.go:282] 0 containers: []
	W1017 19:41:50.627526  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:41:50.627534  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:41:50.627613  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:41:50.657135  696997 cri.go:89] found id: ""
	I1017 19:41:50.657171  696997 logs.go:282] 0 containers: []
	W1017 19:41:50.657183  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:41:50.657190  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:41:50.657243  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:41:50.686194  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:50.686216  696997 cri.go:89] found id: ""
	I1017 19:41:50.686224  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:41:50.686275  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:50.690940  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:41:50.691002  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:41:50.719897  696997 cri.go:89] found id: ""
	I1017 19:41:50.719931  696997 logs.go:282] 0 containers: []
	W1017 19:41:50.719945  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:41:50.719953  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:41:50.720021  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:41:50.750535  696997 cri.go:89] found id: "97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:50.750558  696997 cri.go:89] found id: "ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:50.750562  696997 cri.go:89] found id: ""
	I1017 19:41:50.750570  696997 logs.go:282] 2 containers: [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770]
	I1017 19:41:50.750619  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:50.755126  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:50.759801  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:41:50.759882  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:41:50.788964  696997 cri.go:89] found id: ""
	I1017 19:41:50.788991  696997 logs.go:282] 0 containers: []
	W1017 19:41:50.788999  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:41:50.789006  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:41:50.789067  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:41:50.818768  696997 cri.go:89] found id: ""
	I1017 19:41:50.818796  696997 logs.go:282] 0 containers: []
	W1017 19:41:50.818808  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:41:50.818843  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:41:50.818862  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:41:50.836186  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:41:50.836219  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:50.871170  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:41:50.871204  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:50.926523  696997 logs.go:123] Gathering logs for kube-controller-manager [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770] ...
	I1017 19:41:50.926566  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:50.958628  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:41:50.958660  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:41:51.018884  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:41:51.018922  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:41:51.055435  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:41:51.055475  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:41:51.182896  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:41:51.182938  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:41:51.251894  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:41:51.251915  696997 logs.go:123] Gathering logs for kube-controller-manager [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe] ...
	I1017 19:41:51.251929  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	W1017 19:41:49.731782  736846 pod_ready.go:104] pod "coredns-66bc5c9577-gnx5k" is not "Ready", error: <nil>
	I1017 19:41:52.090000  736846 pod_ready.go:94] pod "coredns-66bc5c9577-gnx5k" is "Ready"
	I1017 19:41:52.090038  736846 pod_ready.go:86] duration metric: took 33.506840151s for pod "coredns-66bc5c9577-gnx5k" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:52.093480  736846 pod_ready.go:83] waiting for pod "etcd-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:52.098812  736846 pod_ready.go:94] pod "etcd-no-preload-171807" is "Ready"
	I1017 19:41:52.098846  736846 pod_ready.go:86] duration metric: took 5.338264ms for pod "etcd-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:52.101309  736846 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:52.106387  736846 pod_ready.go:94] pod "kube-apiserver-no-preload-171807" is "Ready"
	I1017 19:41:52.106419  736846 pod_ready.go:86] duration metric: took 5.082393ms for pod "kube-apiserver-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:52.108913  736846 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:52.287392  736846 pod_ready.go:94] pod "kube-controller-manager-no-preload-171807" is "Ready"
	I1017 19:41:52.287421  736846 pod_ready.go:86] duration metric: took 178.480253ms for pod "kube-controller-manager-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:52.487984  736846 pod_ready.go:83] waiting for pod "kube-proxy-cdbjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:52.887204  736846 pod_ready.go:94] pod "kube-proxy-cdbjg" is "Ready"
	I1017 19:41:52.887238  736846 pod_ready.go:86] duration metric: took 399.228226ms for pod "kube-proxy-cdbjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:53.087631  736846 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:53.487223  736846 pod_ready.go:94] pod "kube-scheduler-no-preload-171807" is "Ready"
	I1017 19:41:53.487258  736846 pod_ready.go:86] duration metric: took 399.594972ms for pod "kube-scheduler-no-preload-171807" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:41:53.487275  736846 pod_ready.go:40] duration metric: took 34.908550348s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:41:53.538588  736846 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 19:41:53.540718  736846 out.go:179] * Done! kubectl is now configured to use "no-preload-171807" cluster and "default" namespace by default
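The readiness loop above (pod_ready.go) polls each control-plane pod by label until it is "Ready" or gone. A roughly equivalent manual check with kubectl, assuming minikube's default behavior of naming the kubeconfig context after the profile:

    kubectl --context no-preload-171807 -n kube-system wait \
      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s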
	I1017 19:41:51.085768  745903 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-112878:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (6.398874828s)
	I1017 19:41:51.085802  745903 kic.go:203] duration metric: took 6.39906432s to extract preloaded images to volume ...
	W1017 19:41:51.085917  745903 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1017 19:41:51.085964  745903 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1017 19:41:51.086010  745903 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 19:41:51.154221  745903 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-112878 --name default-k8s-diff-port-112878 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-112878 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-112878 --network default-k8s-diff-port-112878 --ip 192.168.85.2 --volume default-k8s-diff-port-112878:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 19:41:51.476374  745903 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-112878 --format={{.State.Running}}
	I1017 19:41:51.495224  745903 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-112878 --format={{.State.Status}}
	I1017 19:41:51.516085  745903 cli_runner.go:164] Run: docker exec default-k8s-diff-port-112878 stat /var/lib/dpkg/alternatives/iptables
	I1017 19:41:51.565342  745903 oci.go:144] the created container "default-k8s-diff-port-112878" has a running status.
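The docker run above publishes the node's service ports (22, 2376, 5000, 8444, 32443) on ephemeral 127.0.0.1 ports; the SSH steps below go through whatever host port Docker assigned for 22/tcp (33453 in this run). The full mapping can be listed with:

    docker port default-k8s-diff-port-112878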
	I1017 19:41:51.565382  745903 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21753-492109/.minikube/machines/default-k8s-diff-port-112878/id_rsa...
	I1017 19:41:51.995880  745903 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21753-492109/.minikube/machines/default-k8s-diff-port-112878/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 19:41:52.027720  745903 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-112878 --format={{.State.Status}}
	I1017 19:41:52.047600  745903 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 19:41:52.047629  745903 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-112878 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 19:41:52.096841  745903 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-112878 --format={{.State.Status}}
	I1017 19:41:52.117434  745903 machine.go:93] provisionDockerMachine start ...
	I1017 19:41:52.117526  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:52.136510  745903 main.go:141] libmachine: Using SSH client type: native
	I1017 19:41:52.136815  745903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1017 19:41:52.136833  745903 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:41:52.277302  745903 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-112878
	
	I1017 19:41:52.277339  745903 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-112878"
	I1017 19:41:52.277449  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:52.296491  745903 main.go:141] libmachine: Using SSH client type: native
	I1017 19:41:52.296784  745903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1017 19:41:52.296802  745903 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-112878 && echo "default-k8s-diff-port-112878" | sudo tee /etc/hostname
	I1017 19:41:52.443089  745903 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-112878
	
	I1017 19:41:52.443176  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:52.461477  745903 main.go:141] libmachine: Using SSH client type: native
	I1017 19:41:52.461743  745903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1017 19:41:52.461767  745903 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-112878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-112878/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-112878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:41:52.602406  745903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:41:52.602442  745903 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-492109/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-492109/.minikube}
	I1017 19:41:52.602474  745903 ubuntu.go:190] setting up certificates
	I1017 19:41:52.602491  745903 provision.go:84] configureAuth start
	I1017 19:41:52.602561  745903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-112878
	I1017 19:41:52.620813  745903 provision.go:143] copyHostCerts
	I1017 19:41:52.620894  745903 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem, removing ...
	I1017 19:41:52.620911  745903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem
	I1017 19:41:52.621003  745903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem (1123 bytes)
	I1017 19:41:52.621180  745903 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem, removing ...
	I1017 19:41:52.621197  745903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem
	I1017 19:41:52.621243  745903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem (1679 bytes)
	I1017 19:41:52.621333  745903 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem, removing ...
	I1017 19:41:52.621346  745903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem
	I1017 19:41:52.621380  745903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem (1078 bytes)
	I1017 19:41:52.621453  745903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-112878 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-112878 localhost minikube]
	I1017 19:41:52.851727  745903 provision.go:177] copyRemoteCerts
	I1017 19:41:52.851800  745903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:41:52.851850  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:52.874134  745903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/default-k8s-diff-port-112878/id_rsa Username:docker}
	I1017 19:41:52.977877  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 19:41:52.999283  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1017 19:41:53.018922  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:41:53.038326  745903 provision.go:87] duration metric: took 435.813371ms to configureAuth
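provision.go:117 generates a server certificate whose SANs must cover every name the machine will be dialed by (127.0.0.1, 192.168.85.2, the profile name, localhost, minikube). A quick sanity check of the generated cert with plain openssl, using the path from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'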
	I1017 19:41:53.038362  745903 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:41:53.038589  745903 config.go:182] Loaded profile config "default-k8s-diff-port-112878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:41:53.038783  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:53.057284  745903 main.go:141] libmachine: Using SSH client type: native
	I1017 19:41:53.057586  745903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1017 19:41:53.057609  745903 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:41:53.319990  745903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:41:53.320021  745903 machine.go:96] duration metric: took 1.202562107s to provisionDockerMachine
	I1017 19:41:53.320034  745903 client.go:171] duration metric: took 9.440459566s to LocalClient.Create
	I1017 19:41:53.320053  745903 start.go:167] duration metric: took 9.440540224s to libmachine.API.Create "default-k8s-diff-port-112878"
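The container-runtime option written above lands in /etc/sysconfig/crio.minikube inside the node. After the profile is up it can be confirmed through minikube's ssh wrapper, e.g.:

    minikube -p default-k8s-diff-port-112878 ssh -- cat /etc/sysconfig/crio.minikube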
	I1017 19:41:53.320061  745903 start.go:293] postStartSetup for "default-k8s-diff-port-112878" (driver="docker")
	I1017 19:41:53.320071  745903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:41:53.320133  745903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:41:53.320188  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:53.338483  745903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/default-k8s-diff-port-112878/id_rsa Username:docker}
	I1017 19:41:53.439730  745903 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:41:53.443610  745903 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:41:53.443641  745903 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:41:53.443657  745903 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/addons for local assets ...
	I1017 19:41:53.443741  745903 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/files for local assets ...
	I1017 19:41:53.443855  745903 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem -> 4957252.pem in /etc/ssl/certs
	I1017 19:41:53.443985  745903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:41:53.452577  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:41:53.476283  745903 start.go:296] duration metric: took 156.204633ms for postStartSetup
	I1017 19:41:53.476716  745903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-112878
	I1017 19:41:53.497175  745903 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/config.json ...
	I1017 19:41:53.497496  745903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:41:53.497548  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:53.517985  745903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/default-k8s-diff-port-112878/id_rsa Username:docker}
	W1017 19:41:49.175662  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	W1017 19:41:51.474827  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	I1017 19:41:53.618954  745903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:41:53.624582  745903 start.go:128] duration metric: took 9.74918919s to createHost
	I1017 19:41:53.624613  745903 start.go:83] releasing machines lock for "default-k8s-diff-port-112878", held for 9.749362486s
	I1017 19:41:53.624700  745903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-112878
	I1017 19:41:53.644954  745903 ssh_runner.go:195] Run: cat /version.json
	I1017 19:41:53.645035  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:53.645049  745903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:41:53.645150  745903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:41:53.666211  745903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/default-k8s-diff-port-112878/id_rsa Username:docker}
	I1017 19:41:53.666376  745903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/default-k8s-diff-port-112878/id_rsa Username:docker}
	I1017 19:41:53.761670  745903 ssh_runner.go:195] Run: systemctl --version
	I1017 19:41:53.841749  745903 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:41:53.889911  745903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:41:53.895715  745903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:41:53.895780  745903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:41:53.927529  745903 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1017 19:41:53.927559  745903 start.go:495] detecting cgroup driver to use...
	I1017 19:41:53.927599  745903 detect.go:190] detected "systemd" cgroup driver on host os
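detect.go reports the "systemd" cgroup driver for the host, which later drives the cgroup_manager value written into the CRI-O config. On a Docker host the same answer can be read from the daemon (one way to check it; not necessarily how detect.go does it):

    docker info --format '{{.CgroupDriver}}'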
	I1017 19:41:53.927656  745903 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:41:53.948218  745903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:41:53.964126  745903 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:41:53.964195  745903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:41:53.986091  745903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:41:54.008375  745903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:41:54.110900  745903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:41:54.213981  745903 docker.go:234] disabling docker service ...
	I1017 19:41:54.214056  745903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:41:54.236247  745903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:41:54.250718  745903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:41:54.348852  745903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:41:54.442317  745903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:41:54.456550  745903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:41:54.473516  745903 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:41:54.473584  745903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:54.485020  745903 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 19:41:54.485084  745903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:54.495139  745903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:54.504821  745903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:54.515116  745903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:41:54.524701  745903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:54.534442  745903 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:54.549528  745903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:41:54.559315  745903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:41:54.567747  745903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:41:54.576150  745903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:41:54.660865  745903 ssh_runner.go:195] Run: sudo systemctl restart crio
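The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl, followed by a CRI-O restart. The net effect can be checked on the node with:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf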
	I1017 19:41:54.775480  745903 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:41:54.775545  745903 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:41:54.779760  745903 start.go:563] Will wait 60s for crictl version
	I1017 19:41:54.779828  745903 ssh_runner.go:195] Run: which crictl
	I1017 19:41:54.783936  745903 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:41:54.810648  745903 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:41:54.810758  745903 ssh_runner.go:195] Run: crio --version
	I1017 19:41:54.842382  745903 ssh_runner.go:195] Run: crio --version
	I1017 19:41:54.875317  745903 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:41:53.782830  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:41:53.783314  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:41:53.783390  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:41:53.783463  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:41:53.815737  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:53.815765  696997 cri.go:89] found id: ""
	I1017 19:41:53.815774  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:41:53.815837  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:53.820816  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:41:53.820891  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:41:53.853452  696997 cri.go:89] found id: ""
	I1017 19:41:53.853485  696997 logs.go:282] 0 containers: []
	W1017 19:41:53.853498  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:41:53.853506  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:41:53.853585  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:41:53.887481  696997 cri.go:89] found id: ""
	I1017 19:41:53.887516  696997 logs.go:282] 0 containers: []
	W1017 19:41:53.887528  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:41:53.887536  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:41:53.887620  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:41:53.922787  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:53.922816  696997 cri.go:89] found id: ""
	I1017 19:41:53.922826  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:41:53.922887  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:53.927864  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:41:53.927932  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:41:53.961456  696997 cri.go:89] found id: ""
	I1017 19:41:53.961486  696997 logs.go:282] 0 containers: []
	W1017 19:41:53.961497  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:41:53.961505  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:41:53.961571  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:41:53.995706  696997 cri.go:89] found id: "97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:53.995735  696997 cri.go:89] found id: "ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	I1017 19:41:53.995741  696997 cri.go:89] found id: ""
	I1017 19:41:53.995753  696997 logs.go:282] 2 containers: [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770]
	I1017 19:41:53.995825  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:54.000608  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:54.005044  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:41:54.005111  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:41:54.042906  696997 cri.go:89] found id: ""
	I1017 19:41:54.042941  696997 logs.go:282] 0 containers: []
	W1017 19:41:54.042953  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:41:54.042961  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:41:54.043023  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:41:54.080357  696997 cri.go:89] found id: ""
	I1017 19:41:54.080385  696997 logs.go:282] 0 containers: []
	W1017 19:41:54.080397  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:41:54.080419  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:41:54.080435  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:54.118697  696997 logs.go:123] Gathering logs for kube-controller-manager [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770] ...
	I1017 19:41:54.118728  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	W1017 19:41:54.147807  696997 logs.go:130] failed kube-controller-manager [ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:41:54.145094    6966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770\": container with ID starting with ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770 not found: ID does not exist" containerID="ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	time="2025-10-17T19:41:54Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770\": container with ID starting with ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1017 19:41:54.145094    6966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770\": container with ID starting with ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770 not found: ID does not exist" containerID="ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770"
	time="2025-10-17T19:41:54Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770\": container with ID starting with ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770 not found: ID does not exist"
	
	** /stderr **
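This failure is a benign race: container ed93611c... was still listed by `crictl ps -a` at 19:41:53.995 but had been removed by the time `crictl logs` ran at 19:41:54.145, hence the NotFound. A guard that avoids the noisy error (a sketch using only crictl subcommands shown in this log plus `crictl inspect`):

    id=ed93611cb423df2ad1506a36209bb4147cff4d144c1650607245d10228212770
    sudo crictl inspect "$id" >/dev/null 2>&1 \
      && sudo crictl logs --tail 400 "$id" \
      || echo "container $id no longer exists"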
	I1017 19:41:54.147835  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:41:54.147851  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:41:54.215314  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:41:54.215372  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:41:54.233067  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:41:54.233105  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:41:54.305009  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:41:54.305030  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:41:54.305045  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:54.363612  696997 logs.go:123] Gathering logs for kube-controller-manager [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe] ...
	I1017 19:41:54.363650  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:54.402367  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:41:54.402396  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:41:54.436144  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:41:54.436173  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:41:57.034175  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:41:57.034671  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:41:57.034768  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:41:57.034836  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:41:57.065022  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:57.065048  696997 cri.go:89] found id: ""
	I1017 19:41:57.065059  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:41:57.065122  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:57.069618  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:41:57.069719  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:41:57.098020  696997 cri.go:89] found id: ""
	I1017 19:41:57.098045  696997 logs.go:282] 0 containers: []
	W1017 19:41:57.098053  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:41:57.098060  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:41:57.098122  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:41:57.127769  696997 cri.go:89] found id: ""
	I1017 19:41:57.127793  696997 logs.go:282] 0 containers: []
	W1017 19:41:57.127801  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:41:57.127808  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:41:57.127957  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:41:57.159935  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:57.159960  696997 cri.go:89] found id: ""
	I1017 19:41:57.159971  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:41:57.160033  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:57.164577  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:41:57.164652  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:41:57.195419  696997 cri.go:89] found id: ""
	I1017 19:41:57.195448  696997 logs.go:282] 0 containers: []
	W1017 19:41:57.195460  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:41:57.195476  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:41:57.195545  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:41:57.225635  696997 cri.go:89] found id: "97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:57.225655  696997 cri.go:89] found id: ""
	I1017 19:41:57.225663  696997 logs.go:282] 1 containers: [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe]
	I1017 19:41:57.225744  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:41:57.230083  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:41:57.230152  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:41:57.259600  696997 cri.go:89] found id: ""
	I1017 19:41:57.259625  696997 logs.go:282] 0 containers: []
	W1017 19:41:57.259632  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:41:57.259641  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:41:57.259732  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:41:57.291664  696997 cri.go:89] found id: ""
	I1017 19:41:57.291705  696997 logs.go:282] 0 containers: []
	W1017 19:41:57.291719  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:41:57.291732  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:41:57.291755  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:41:57.326995  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:41:57.327027  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:41:57.383885  696997 logs.go:123] Gathering logs for kube-controller-manager [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe] ...
	I1017 19:41:57.383926  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:41:54.876655  745903 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-112878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:41:54.896020  745903 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 19:41:54.900719  745903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
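Note: the hosts rewrite above is an idempotent strip-and-append: remove any stale host.minikube.internal line, append the current mapping, then copy the temp file back over /etc/hosts in a single sudo step. A minimal standalone sketch of the same pattern (IP and hostname taken from the log line above):

    # drop the old entry (if any), append the fresh one, then install the result
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$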
	I1017 19:41:54.912420  745903 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-112878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-112878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:41:54.912551  745903 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:41:54.912619  745903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:41:54.951205  745903 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:41:54.951230  745903 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:41:54.951292  745903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:41:54.982389  745903 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:41:54.982415  745903 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:41:54.982423  745903 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1017 19:41:54.982507  745903 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-112878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-112878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
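Note: the [Unit]/[Service] fragment above is rendered as a systemd drop-in, not a full unit; the scp at 19:41:55 below places it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes). To see the unit systemd will actually run after the daemon-reload:

    # merged view: base kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl cat kubelet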
	I1017 19:41:54.982569  745903 ssh_runner.go:195] Run: crio config
	I1017 19:41:55.030938  745903 cni.go:84] Creating CNI manager for ""
	I1017 19:41:55.030967  745903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:41:55.030987  745903 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:41:55.031011  745903 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-112878 NodeName:default-k8s-diff-port-112878 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:41:55.031131  745903 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-112878"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 19:41:55.031210  745903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:41:55.040219  745903 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:41:55.040286  745903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:41:55.048810  745903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1017 19:41:55.062892  745903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:41:55.079756  745903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
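Note: the multi-document config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what was just staged as /var/tmp/minikube/kubeadm.yaml.new (2224 bytes) and is consumed by the kubeadm init at 19:41:56.305288 below. One way to sanity-check it on the node ('kubeadm config validate' exists in recent kubeadm releases; treat its availability on the pinned binary as an assumption):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new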
	I1017 19:41:55.094593  745903 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 19:41:55.098876  745903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:41:55.109971  745903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:41:55.196482  745903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:41:55.226525  745903 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878 for IP: 192.168.85.2
	I1017 19:41:55.226551  745903 certs.go:195] generating shared ca certs ...
	I1017 19:41:55.226575  745903 certs.go:227] acquiring lock for ca certs: {Name:mkc97483d62151ba5c32d923dd19e3e2b3661468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:55.226784  745903 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key
	I1017 19:41:55.226831  745903 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key
	I1017 19:41:55.226842  745903 certs.go:257] generating profile certs ...
	I1017 19:41:55.226900  745903 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/client.key
	I1017 19:41:55.226921  745903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/client.crt with IP's: []
	I1017 19:41:55.371718  745903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/client.crt ...
	I1017 19:41:55.371749  745903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/client.crt: {Name:mkc6056f4159c9badc3cdb573eca9fad46db65c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:55.371927  745903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/client.key ...
	I1017 19:41:55.371940  745903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/client.key: {Name:mk28fb14f4859226ed9121c1a2de1ac3628155bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:55.372020  745903 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.key.55092fd4
	I1017 19:41:55.372037  745903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.crt.55092fd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1017 19:41:55.435601  745903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.crt.55092fd4 ...
	I1017 19:41:55.435634  745903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.crt.55092fd4: {Name:mk0f6162d53fec5018596205793f8f650c48ad99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:55.435855  745903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.key.55092fd4 ...
	I1017 19:41:55.435876  745903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.key.55092fd4: {Name:mkd7f4773df571e9a40ca5fa7833cc5056f2efda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:55.435962  745903 certs.go:382] copying /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.crt.55092fd4 -> /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.crt
	I1017 19:41:55.436039  745903 certs.go:386] copying /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.key.55092fd4 -> /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.key
	I1017 19:41:55.436107  745903 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/proxy-client.key
	I1017 19:41:55.436123  745903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/proxy-client.crt with IP's: []
	I1017 19:41:55.750469  745903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/proxy-client.crt ...
	I1017 19:41:55.750502  745903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/proxy-client.crt: {Name:mk71b1e23b81cc0ebbc0dffc742665d19c9879b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:41:55.750700  745903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/proxy-client.key ...
	I1017 19:41:55.750714  745903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/proxy-client.key: {Name:mk61aa2c336ce37f90e9cb643e557e29ac524333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
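Note: certs.go signs the profile certs above (client, apiserver, aggregator proxy-client) in-process with Go crypto against the shared minikubeCA material; minikube does not shell out to openssl for this. A purely illustrative openssl equivalent of one client-cert signing step (file names and subject fields are assumptions, not minikube's actual values):

    # illustrative only: key, CSR, then sign with the cluster CA
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 365 -out client.crt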
	I1017 19:41:55.750907  745903 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725.pem (1338 bytes)
	W1017 19:41:55.750947  745903 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725_empty.pem, impossibly tiny 0 bytes
	I1017 19:41:55.750956  745903 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 19:41:55.750988  745903 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem (1078 bytes)
	I1017 19:41:55.751010  745903 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:41:55.751033  745903 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem (1679 bytes)
	I1017 19:41:55.751079  745903 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:41:55.751671  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:41:55.773443  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:41:55.793202  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:41:55.812967  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 19:41:55.832311  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:41:55.851598  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 19:41:55.871256  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:41:55.890758  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/default-k8s-diff-port-112878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:41:55.910393  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725.pem --> /usr/share/ca-certificates/495725.pem (1338 bytes)
	I1017 19:41:55.933242  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /usr/share/ca-certificates/4957252.pem (1708 bytes)
	I1017 19:41:55.954814  745903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:41:55.974799  745903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:41:55.988801  745903 ssh_runner.go:195] Run: openssl version
	I1017 19:41:55.995437  745903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/495725.pem && ln -fs /usr/share/ca-certificates/495725.pem /etc/ssl/certs/495725.pem"
	I1017 19:41:56.005322  745903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/495725.pem
	I1017 19:41:56.009540  745903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/495725.pem
	I1017 19:41:56.009604  745903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/495725.pem
	I1017 19:41:56.044653  745903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/495725.pem /etc/ssl/certs/51391683.0"
	I1017 19:41:56.054453  745903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4957252.pem && ln -fs /usr/share/ca-certificates/4957252.pem /etc/ssl/certs/4957252.pem"
	I1017 19:41:56.064567  745903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4957252.pem
	I1017 19:41:56.068931  745903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/4957252.pem
	I1017 19:41:56.068984  745903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4957252.pem
	I1017 19:41:56.105262  745903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4957252.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:41:56.115165  745903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:41:56.124644  745903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:41:56.128961  745903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:41:56.129036  745903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:41:56.164429  745903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
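Note: the ln -fs targets above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash symlinks: each openssl x509 -hash -noout run computes the 8-hex-digit hash under which libssl looks a CA up in /etc/ssl/certs. The same linkage by hand:

    # link the CA into the trust dir under its subject hash
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"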
	I1017 19:41:56.174208  745903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:41:56.178436  745903 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
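Note: the failed statx above is the intended first-start signal: certs.go treats ENOENT on apiserver-kubelet-client.crt as "fresh control plane, proceed with full init". The equivalent check:

    # first-start heuristic: cert absent => full kubeadm init ahead
    sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1 \
      || echo "likely first start"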
	I1017 19:41:56.178496  745903 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-112878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-112878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:41:56.178587  745903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:41:56.178657  745903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:41:56.208824  745903 cri.go:89] found id: ""
	I1017 19:41:56.208892  745903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:41:56.217790  745903 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 19:41:56.226793  745903 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 19:41:56.226866  745903 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 19:41:56.235617  745903 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 19:41:56.235640  745903 kubeadm.go:157] found existing configuration files:
	
	I1017 19:41:56.235703  745903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1017 19:41:56.244591  745903 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 19:41:56.244645  745903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 19:41:56.253278  745903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1017 19:41:56.262226  745903 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 19:41:56.262315  745903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 19:41:56.270623  745903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1017 19:41:56.279505  745903 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 19:41:56.279569  745903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 19:41:56.287938  745903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1017 19:41:56.296709  745903 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 19:41:56.296788  745903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
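Note: the four grep/rm pairs above are one stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at this cluster's endpoint, otherwise removed so kubeadm regenerates it. The per-file pattern, written as a loop for clarity:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done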
	I1017 19:41:56.305288  745903 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 19:41:56.371823  745903 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 19:41:56.434893  745903 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1017 19:41:53.979823  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	W1017 19:41:56.473888  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	W1017 19:41:58.474498  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	I1017 19:41:57.416632  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:41:57.416670  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:41:57.487169  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:41:57.487222  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
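Note: the container-status command above wraps crictl in a backtick fallback: `which crictl || echo crictl` yields the full path when crictl is installed and degrades to the bare name otherwise, and the trailing || falls through to Docker if the CRI listing fails entirely. Expanded long form:

    # equivalent of the one-liner the runner executes
    CRICTL="$(which crictl || echo crictl)"
    sudo "$CRICTL" ps -a || sudo docker ps -a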
	I1017 19:41:57.520668  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:41:57.520717  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:41:57.619586  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:41:57.619629  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
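Note: the dmesg flags above decode as --nopager (-P), --human (-H), and --color=never (-L=never), restricted to warn-and-worse records and capped at 400 lines. Long-form equivalent:

    sudo dmesg --human --nopager --color=never \
      --level=warn,err,crit,alert,emerg | tail -n 400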
	I1017 19:41:57.637960  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:41:57.638000  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:41:57.700226  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
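Note: the describe-nodes failure and the healthz probes that follow are the same symptom: nothing is listening on the apiserver port yet, so kubectl (localhost:8443) and the direct GET to 192.168.76.2:8443 both get connection refused. A manual probe from the node, assuming curl is present in the image:

    # both should report 'connection refused' while the apiserver is down
    curl -ksS https://192.168.76.2:8443/healthz
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz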
	I1017 19:42:00.200846  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:42:00.201381  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:42:00.201441  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:42:00.201491  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:42:00.233549  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:42:00.233570  696997 cri.go:89] found id: ""
	I1017 19:42:00.233578  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:42:00.233637  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:00.238003  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:42:00.238084  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:42:00.269145  696997 cri.go:89] found id: ""
	I1017 19:42:00.269181  696997 logs.go:282] 0 containers: []
	W1017 19:42:00.269192  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:42:00.269203  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:42:00.269260  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:42:00.300539  696997 cri.go:89] found id: ""
	I1017 19:42:00.300571  696997 logs.go:282] 0 containers: []
	W1017 19:42:00.300583  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:42:00.300591  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:42:00.300656  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:42:00.332263  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:42:00.332299  696997 cri.go:89] found id: ""
	I1017 19:42:00.332308  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:42:00.332383  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:00.336968  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:42:00.337034  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:42:00.368223  696997 cri.go:89] found id: ""
	I1017 19:42:00.368250  696997 logs.go:282] 0 containers: []
	W1017 19:42:00.368262  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:42:00.368270  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:42:00.368339  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:42:00.398184  696997 cri.go:89] found id: "97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:42:00.398210  696997 cri.go:89] found id: ""
	I1017 19:42:00.398220  696997 logs.go:282] 1 containers: [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe]
	I1017 19:42:00.398283  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:00.402968  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:42:00.403040  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:42:00.438191  696997 cri.go:89] found id: ""
	I1017 19:42:00.438220  696997 logs.go:282] 0 containers: []
	W1017 19:42:00.438232  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:42:00.438239  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:42:00.438303  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:42:00.469917  696997 cri.go:89] found id: ""
	I1017 19:42:00.469963  696997 logs.go:282] 0 containers: []
	W1017 19:42:00.469975  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:42:00.469987  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:42:00.470002  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:42:00.532636  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:42:00.532675  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:42:00.570627  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:42:00.570659  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:42:00.665335  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:42:00.665379  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:42:00.683771  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:42:00.683814  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:42:00.751728  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:42:00.751755  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:42:00.751770  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:42:00.787331  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:42:00.787374  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:42:00.848156  696997 logs.go:123] Gathering logs for kube-controller-manager [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe] ...
	I1017 19:42:00.848199  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	W1017 19:42:00.974385  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	W1017 19:42:02.974546  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	I1017 19:42:03.380762  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:42:03.381177  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:42:03.381229  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:42:03.381282  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:42:03.416064  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:42:03.416089  696997 cri.go:89] found id: ""
	I1017 19:42:03.416101  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:42:03.416165  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:03.421437  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:42:03.421636  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:42:03.455831  696997 cri.go:89] found id: ""
	I1017 19:42:03.455859  696997 logs.go:282] 0 containers: []
	W1017 19:42:03.455867  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:42:03.455873  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:42:03.455931  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:42:03.491585  696997 cri.go:89] found id: ""
	I1017 19:42:03.491617  696997 logs.go:282] 0 containers: []
	W1017 19:42:03.491721  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:42:03.491730  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:42:03.491892  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:42:03.525823  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:42:03.525850  696997 cri.go:89] found id: ""
	I1017 19:42:03.525862  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:42:03.525935  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:03.530424  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:42:03.530499  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:42:03.565584  696997 cri.go:89] found id: ""
	I1017 19:42:03.565612  696997 logs.go:282] 0 containers: []
	W1017 19:42:03.565628  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:42:03.565636  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:42:03.565707  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:42:03.602994  696997 cri.go:89] found id: "97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:42:03.603016  696997 cri.go:89] found id: ""
	I1017 19:42:03.603026  696997 logs.go:282] 1 containers: [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe]
	I1017 19:42:03.603086  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:03.608187  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:42:03.608259  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:42:03.643990  696997 cri.go:89] found id: ""
	I1017 19:42:03.644021  696997 logs.go:282] 0 containers: []
	W1017 19:42:03.644043  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:42:03.644052  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:42:03.644141  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:42:03.676998  696997 cri.go:89] found id: ""
	I1017 19:42:03.677026  696997 logs.go:282] 0 containers: []
	W1017 19:42:03.677037  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:42:03.677049  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:42:03.677064  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:42:03.720739  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:42:03.720777  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:42:03.794628  696997 logs.go:123] Gathering logs for kube-controller-manager [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe] ...
	I1017 19:42:03.794672  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:42:03.830773  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:42:03.830801  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:42:03.914552  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:42:03.914594  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:42:03.950023  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:42:03.950063  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:42:04.072786  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:42:04.072823  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:42:04.091894  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:42:04.091939  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:42:04.160835  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:42:06.661775  696997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 19:42:06.662352  696997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 19:42:06.662432  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:42:06.662502  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:42:06.702042  696997 cri.go:89] found id: "5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:42:06.702068  696997 cri.go:89] found id: ""
	I1017 19:42:06.702079  696997 logs.go:282] 1 containers: [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690]
	I1017 19:42:06.702156  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:06.707811  696997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:42:06.707894  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:42:06.755533  696997 cri.go:89] found id: ""
	I1017 19:42:06.755650  696997 logs.go:282] 0 containers: []
	W1017 19:42:06.755730  696997 logs.go:284] No container was found matching "etcd"
	I1017 19:42:06.755743  696997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:42:06.755808  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:42:06.787076  696997 cri.go:89] found id: ""
	I1017 19:42:06.787101  696997 logs.go:282] 0 containers: []
	W1017 19:42:06.787111  696997 logs.go:284] No container was found matching "coredns"
	I1017 19:42:06.787118  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:42:06.787180  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:42:06.817208  696997 cri.go:89] found id: "262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:42:06.817235  696997 cri.go:89] found id: ""
	I1017 19:42:06.817246  696997 logs.go:282] 1 containers: [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7]
	I1017 19:42:06.817313  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:06.822980  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:42:06.823058  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:42:06.857086  696997 cri.go:89] found id: ""
	I1017 19:42:06.857125  696997 logs.go:282] 0 containers: []
	W1017 19:42:06.857135  696997 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:42:06.857145  696997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:42:06.857210  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:42:06.892760  696997 cri.go:89] found id: "97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:42:06.892784  696997 cri.go:89] found id: ""
	I1017 19:42:06.892793  696997 logs.go:282] 1 containers: [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe]
	I1017 19:42:06.892854  696997 ssh_runner.go:195] Run: which crictl
	I1017 19:42:06.898140  696997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:42:06.898218  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:42:06.930132  696997 cri.go:89] found id: ""
	I1017 19:42:06.930157  696997 logs.go:282] 0 containers: []
	W1017 19:42:06.930167  696997 logs.go:284] No container was found matching "kindnet"
	I1017 19:42:06.930173  696997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 19:42:06.930229  696997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 19:42:06.958873  696997 cri.go:89] found id: ""
	I1017 19:42:06.958900  696997 logs.go:282] 0 containers: []
	W1017 19:42:06.958908  696997 logs.go:284] No container was found matching "storage-provisioner"
	I1017 19:42:06.958919  696997 logs.go:123] Gathering logs for kubelet ...
	I1017 19:42:06.958932  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:42:07.053179  696997 logs.go:123] Gathering logs for dmesg ...
	I1017 19:42:07.053221  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:42:07.072919  696997 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:42:07.072954  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:42:07.135798  696997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:42:07.135826  696997 logs.go:123] Gathering logs for kube-apiserver [5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690] ...
	I1017 19:42:07.135844  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e1473d7b53300c49dddda134bdf51cfac6b0aaac4b3bb50af5eaf841b4ab690"
	I1017 19:42:07.177675  696997 logs.go:123] Gathering logs for kube-scheduler [262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7] ...
	I1017 19:42:07.177730  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 262e1aa22974d448092643b0f060b8cd4bc0869ab085a686dfaf9ab3bced16a7"
	I1017 19:42:07.237968  696997 logs.go:123] Gathering logs for kube-controller-manager [97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe] ...
	I1017 19:42:07.238010  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97d08b8ee7ebf47a3017170d00d2b8ba2e937fe5d262e0c16cd1be4ddaa444fe"
	I1017 19:42:07.267890  696997 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:42:07.267928  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:42:07.334560  696997 logs.go:123] Gathering logs for container status ...
	I1017 19:42:07.334603  696997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:42:07.480427  745903 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 19:42:07.480500  745903 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 19:42:07.480646  745903 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 19:42:07.480769  745903 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1017 19:42:07.480883  745903 kubeadm.go:318] OS: Linux
	I1017 19:42:07.480966  745903 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 19:42:07.481049  745903 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 19:42:07.481135  745903 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 19:42:07.481229  745903 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 19:42:07.481283  745903 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 19:42:07.481343  745903 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 19:42:07.481408  745903 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 19:42:07.481461  745903 kubeadm.go:318] CGROUPS_IO: enabled
	I1017 19:42:07.481559  745903 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 19:42:07.481733  745903 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 19:42:07.481858  745903 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 19:42:07.481933  745903 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 19:42:07.487731  745903 out.go:252]   - Generating certificates and keys ...
	I1017 19:42:07.487831  745903 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 19:42:07.487894  745903 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 19:42:07.487952  745903 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 19:42:07.487998  745903 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 19:42:07.488081  745903 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 19:42:07.488173  745903 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 19:42:07.488252  745903 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 19:42:07.488403  745903 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-112878 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 19:42:07.488495  745903 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 19:42:07.488637  745903 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-112878 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 19:42:07.488785  745903 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 19:42:07.488883  745903 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 19:42:07.488952  745903 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 19:42:07.489028  745903 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 19:42:07.489105  745903 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 19:42:07.489189  745903 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 19:42:07.489281  745903 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 19:42:07.489417  745903 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 19:42:07.489506  745903 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 19:42:07.489604  745903 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 19:42:07.489727  745903 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 19:42:07.491207  745903 out.go:252]   - Booting up control plane ...
	I1017 19:42:07.491333  745903 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 19:42:07.491450  745903 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 19:42:07.491547  745903 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 19:42:07.491668  745903 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 19:42:07.491809  745903 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 19:42:07.491954  745903 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 19:42:07.492096  745903 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 19:42:07.492186  745903 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 19:42:07.492374  745903 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 19:42:07.492492  745903 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 19:42:07.492559  745903 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001789962s
	I1017 19:42:07.492650  745903 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 19:42:07.492758  745903 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1017 19:42:07.492900  745903 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 19:42:07.493019  745903 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 19:42:07.493146  745903 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.218966764s
	I1017 19:42:07.493240  745903 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.292476669s
	I1017 19:42:07.493334  745903 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001732588s
	I1017 19:42:07.493484  745903 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 19:42:07.493658  745903 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 19:42:07.493741  745903 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 19:42:07.494015  745903 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-112878 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 19:42:07.494097  745903 kubeadm.go:318] [bootstrap-token] Using token: d7re57.2kq1vkaf70o3u8fc
	I1017 19:42:07.496663  745903 out.go:252]   - Configuring RBAC rules ...
	I1017 19:42:07.496822  745903 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 19:42:07.496952  745903 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 19:42:07.497127  745903 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 19:42:07.497293  745903 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 19:42:07.497434  745903 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 19:42:07.497529  745903 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 19:42:07.497636  745903 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 19:42:07.497673  745903 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 19:42:07.497738  745903 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 19:42:07.497745  745903 kubeadm.go:318] 
	I1017 19:42:07.497800  745903 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 19:42:07.497805  745903 kubeadm.go:318] 
	I1017 19:42:07.497877  745903 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 19:42:07.497886  745903 kubeadm.go:318] 
	I1017 19:42:07.497915  745903 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 19:42:07.497963  745903 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 19:42:07.498008  745903 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 19:42:07.498014  745903 kubeadm.go:318] 
	I1017 19:42:07.498074  745903 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 19:42:07.498091  745903 kubeadm.go:318] 
	I1017 19:42:07.498142  745903 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 19:42:07.498149  745903 kubeadm.go:318] 
	I1017 19:42:07.498217  745903 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 19:42:07.498319  745903 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 19:42:07.498387  745903 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 19:42:07.498396  745903 kubeadm.go:318] 
	I1017 19:42:07.498521  745903 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 19:42:07.498639  745903 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 19:42:07.498652  745903 kubeadm.go:318] 
	I1017 19:42:07.498768  745903 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token d7re57.2kq1vkaf70o3u8fc \
	I1017 19:42:07.498911  745903 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e \
	I1017 19:42:07.498947  745903 kubeadm.go:318] 	--control-plane 
	I1017 19:42:07.498954  745903 kubeadm.go:318] 
	I1017 19:42:07.499077  745903 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 19:42:07.499089  745903 kubeadm.go:318] 
	I1017 19:42:07.499192  745903 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token d7re57.2kq1vkaf70o3u8fc \
	I1017 19:42:07.499349  745903 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e 
	I1017 19:42:07.499370  745903 cni.go:84] Creating CNI manager for ""
	I1017 19:42:07.499389  745903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:42:07.500922  745903 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 19:42:07.502190  745903 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 19:42:07.507142  745903 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 19:42:07.507176  745903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 19:42:07.521881  745903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 19:42:07.755871  745903 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 19:42:07.755939  745903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:42:07.755972  745903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-112878 minikube.k8s.io/updated_at=2025_10_17T19_42_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=default-k8s-diff-port-112878 minikube.k8s.io/primary=true
	I1017 19:42:07.770253  745903 ops.go:34] apiserver oom_adj: -16
	I1017 19:42:07.860892  745903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:42:08.361714  745903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1017 19:42:04.974856  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	W1017 19:42:07.475568  741107 pod_ready.go:104] pod "coredns-66bc5c9577-v8hls" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 17 19:41:28 no-preload-171807 crio[562]: time="2025-10-17T19:41:28.753630167Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:41:28 no-preload-171807 crio[562]: time="2025-10-17T19:41:28.757469095Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:41:28 no-preload-171807 crio[562]: time="2025-10-17T19:41:28.757604416Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.915216435Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6a123bd3-3d4a-4829-b3ea-4facadcb8e5b name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.916250457Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f4157fbf-cae7-4b56-82f5-f4ba7407713e name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.917551761Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm/dashboard-metrics-scraper" id=69060cb6-aa49-47c5-9790-8344f08a7e8f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.918045413Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.92633991Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.927069665Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.961091931Z" level=info msg="Created container b00a978ba6c2ba2beea9f7bc631934a305976c12436f5a13772cbbabda6c49c3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm/dashboard-metrics-scraper" id=69060cb6-aa49-47c5-9790-8344f08a7e8f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.961973346Z" level=info msg="Starting container: b00a978ba6c2ba2beea9f7bc631934a305976c12436f5a13772cbbabda6c49c3" id=d9ddb5b3-9ab7-49bc-9e30-328bfec29f47 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:41:44 no-preload-171807 crio[562]: time="2025-10-17T19:41:44.96436027Z" level=info msg="Started container" PID=1736 containerID=b00a978ba6c2ba2beea9f7bc631934a305976c12436f5a13772cbbabda6c49c3 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm/dashboard-metrics-scraper id=d9ddb5b3-9ab7-49bc-9e30-328bfec29f47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b02da81e7c2c9638d359df7ded04ecb871bc8ba874aba03f04dfc084e0d1351
	Oct 17 19:41:45 no-preload-171807 crio[562]: time="2025-10-17T19:41:45.024277135Z" level=info msg="Removing container: e4f1663501bd0ce9b6a40d5275c3f5abd2e7c8066a89e0b36bda675dda265af6" id=ad2349d0-812e-4a15-9dfd-dfd9b363f4c6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:41:45 no-preload-171807 crio[562]: time="2025-10-17T19:41:45.037792491Z" level=info msg="Removed container e4f1663501bd0ce9b6a40d5275c3f5abd2e7c8066a89e0b36bda675dda265af6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm/dashboard-metrics-scraper" id=ad2349d0-812e-4a15-9dfd-dfd9b363f4c6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.037597627Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=682d3c90-1fdb-4ef1-b0b1-f6b535c801a8 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.047258701Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f0fbbc59-41c9-4b95-903f-ac4d37e32730 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.048612317Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=3bf160af-457b-42f9-8d04-086483f9a2f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.048953943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.19697977Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.197238277Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fe78c967bdeef54f05cabed306b6c2e83ed2d4b57c4db159f9a4d6f801a2fe5e/merged/etc/passwd: no such file or directory"
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.197286118Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fe78c967bdeef54f05cabed306b6c2e83ed2d4b57c4db159f9a4d6f801a2fe5e/merged/etc/group: no such file or directory"
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.19762322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.376262586Z" level=info msg="Created container c38fc3e3e5753ebdf5ea7669f5ca235a915e6ca85e02b4d3a1dd0a1412bfb0b3: kube-system/storage-provisioner/storage-provisioner" id=3bf160af-457b-42f9-8d04-086483f9a2f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.377162361Z" level=info msg="Starting container: c38fc3e3e5753ebdf5ea7669f5ca235a915e6ca85e02b4d3a1dd0a1412bfb0b3" id=65be344d-3062-446e-84b7-69c701db9621 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:41:49 no-preload-171807 crio[562]: time="2025-10-17T19:41:49.379658913Z" level=info msg="Started container" PID=1753 containerID=c38fc3e3e5753ebdf5ea7669f5ca235a915e6ca85e02b4d3a1dd0a1412bfb0b3 description=kube-system/storage-provisioner/storage-provisioner id=65be344d-3062-446e-84b7-69c701db9621 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f38c050ba6d27fcc5fa6891101af37d594cff3163b1d7c83d289fb378b7d590
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c38fc3e3e5753       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   8f38c050ba6d2       storage-provisioner                          kube-system
	b00a978ba6c2b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   0b02da81e7c2c       dashboard-metrics-scraper-6ffb444bf9-fqmgm   kubernetes-dashboard
	e35ca6f1c73b7       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   24042a14154aa       kubernetes-dashboard-855c9754f9-4kqlp        kubernetes-dashboard
	e92d1fe44275c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   7ed947a361ffd       busybox                                      default
	835887455a526       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   32357fe29053d       coredns-66bc5c9577-gnx5k                     kube-system
	a2184126b0f26       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   8f38c050ba6d2       storage-provisioner                          kube-system
	d022a76c654d2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   8c239914db371       kindnet-tk5hv                                kube-system
	8604f98158605       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   0e7c7331e3d42       kube-proxy-cdbjg                             kube-system
	d86dd76d8b3bd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   218302f0d68d3       kube-controller-manager-no-preload-171807    kube-system
	2c72f7d2bb251       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   055045014b278       kube-apiserver-no-preload-171807             kube-system
	2e00090e4a67b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   d6e37d3017f66       etcd-no-preload-171807                       kube-system
	3c4af638c6379       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   5db8b2a72a08c       kube-scheduler-no-preload-171807             kube-system
	
	
	==> coredns [835887455a526598d2d867876cd5a46611eab57d28140e1ba67e9ee8f72601e5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59993 - 44391 "HINFO IN 1845336670314001142.1568016722406365941. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.489644275s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-171807
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-171807
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=no-preload-171807
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_40_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:40:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-171807
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:41:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:41:58 +0000   Fri, 17 Oct 2025 19:40:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:41:58 +0000   Fri, 17 Oct 2025 19:40:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:41:58 +0000   Fri, 17 Oct 2025 19:40:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:41:58 +0000   Fri, 17 Oct 2025 19:40:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-171807
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                4a402992-3a00-457b-a9c9-3f38efedf1af
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-gnx5k                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-no-preload-171807                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-tk5hv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-no-preload-171807              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-171807     200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-cdbjg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-no-preload-171807              100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fqmgm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4kqlp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node no-preload-171807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node no-preload-171807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node no-preload-171807 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node no-preload-171807 event: Registered Node no-preload-171807 in Controller
	  Normal  NodeReady                93s                kubelet          Node no-preload-171807 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 56s)  kubelet          Node no-preload-171807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 56s)  kubelet          Node no-preload-171807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 56s)  kubelet          Node no-preload-171807 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node no-preload-171807 event: Registered Node no-preload-171807 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d1 49 91 03 c2 08 06
	[  +0.000804] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 a9 2b 44 da ae 08 06
	[Oct17 18:59] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022229] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023876] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	
	
	==> etcd [2e00090e4a67b40ac53e71a16e43401493b444c9846af2e602339d93281be030] <==
	{"level":"warn","ts":"2025-10-17T19:41:16.351123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.358584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.366972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.374511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.403842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.411190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.418510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.425461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.432449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.447278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.453933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.466042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.474537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.481857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.495598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.504086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.511805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:41:16.558700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59624","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T19:41:48.241119Z","caller":"traceutil/trace.go:172","msg":"trace[931953741] transaction","detail":"{read_only:false; response_revision:614; number_of_response:1; }","duration":"148.453777ms","start":"2025-10-17T19:41:48.092610Z","end":"2025-10-17T19:41:48.241064Z","steps":["trace[931953741] 'process raft request'  (duration: 148.288402ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:41:49.702071Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.283746ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-gnx5k\" limit:1 ","response":"range_response_count:1 size:5935"}
	{"level":"warn","ts":"2025-10-17T19:41:49.702150Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.224133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/storage-provisioner.186f5ead6cc90046\" limit:1 ","response":"range_response_count:1 size:764"}
	{"level":"info","ts":"2025-10-17T19:41:49.702178Z","caller":"traceutil/trace.go:172","msg":"trace[47940972] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-gnx5k; range_end:; response_count:1; response_revision:618; }","duration":"117.386244ms","start":"2025-10-17T19:41:49.584777Z","end":"2025-10-17T19:41:49.702163Z","steps":["trace[47940972] 'range keys from in-memory index tree'  (duration: 117.145457ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:41:49.702198Z","caller":"traceutil/trace.go:172","msg":"trace[595834552] range","detail":"{range_begin:/registry/events/kube-system/storage-provisioner.186f5ead6cc90046; range_end:; response_count:1; response_revision:618; }","duration":"118.297058ms","start":"2025-10-17T19:41:49.583888Z","end":"2025-10-17T19:41:49.702185Z","steps":["trace[595834552] 'range keys from in-memory index tree'  (duration: 118.072243ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:41:49.702053Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.802737ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T19:41:49.702332Z","caller":"traceutil/trace.go:172","msg":"trace[987343703] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:618; }","duration":"185.102736ms","start":"2025-10-17T19:41:49.517210Z","end":"2025-10-17T19:41:49.702313Z","steps":["trace[987343703] 'range keys from in-memory index tree'  (duration: 184.728558ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:42:10 up  3:24,  0 user,  load average: 4.01, 3.38, 2.15
	Linux no-preload-171807 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d022a76c654d2e18ebf220443cc9aab41bb02d48d7f4800b39daf43d8ce2eea1] <==
	I1017 19:41:18.536193       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:41:18.536512       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1017 19:41:18.536761       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:41:18.536782       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:41:18.536820       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:41:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:41:18.736169       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:41:18.736237       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:41:18.736250       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:41:18.737730       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:41:19.137081       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:41:19.137112       1 metrics.go:72] Registering metrics
	I1017 19:41:19.137193       1 controller.go:711] "Syncing nftables rules"
	I1017 19:41:28.736058       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 19:41:28.736105       1 main.go:301] handling current node
	I1017 19:41:38.739036       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 19:41:38.739096       1 main.go:301] handling current node
	I1017 19:41:48.736949       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 19:41:48.737008       1 main.go:301] handling current node
	I1017 19:41:58.736931       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 19:41:58.736986       1 main.go:301] handling current node
	I1017 19:42:08.744794       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 19:42:08.744833       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2c72f7d2bb251ff207976219245143bbd296d8b6a6495c2e5556d0e9da8f1099] <==
	I1017 19:41:17.068510       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 19:41:17.068650       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1017 19:41:17.069175       1 aggregator.go:171] initial CRD sync complete...
	I1017 19:41:17.069186       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 19:41:17.069193       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 19:41:17.069199       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:41:17.068514       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 19:41:17.068603       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1017 19:41:17.074993       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 19:41:17.077209       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 19:41:17.107636       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 19:41:17.124180       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 19:41:17.124214       1 policy_source.go:240] refreshing policies
	I1017 19:41:17.126778       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:41:17.317593       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 19:41:17.347870       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 19:41:17.371268       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:41:17.379468       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:41:17.388037       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:41:17.429566       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.164.82"}
	I1017 19:41:17.440122       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.237.19"}
	I1017 19:41:17.971259       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:41:20.879781       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:41:20.977487       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:41:21.029311       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d86dd76d8b3bd2505d622c4f7afdac7241ad790540b4197dfa7a873877fdd920] <==
	I1017 19:41:20.408749       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 19:41:20.411016       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 19:41:20.413152       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 19:41:20.415423       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 19:41:20.417715       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 19:41:20.425441       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 19:41:20.425471       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 19:41:20.425507       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 19:41:20.425584       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:41:20.425606       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:41:20.425618       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 19:41:20.425713       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 19:41:20.425834       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 19:41:20.425926       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:41:20.428391       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 19:41:20.431393       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 19:41:20.431720       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 19:41:20.431790       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 19:41:20.431824       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 19:41:20.431833       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 19:41:20.431843       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 19:41:20.432893       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:41:20.455073       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 19:41:20.461517       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:41:20.465070       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8604f98158605205b8f1f8315ebc37171cf7eca33ac7f8dff67117b30bbd6b4d] <==
	I1017 19:41:18.316406       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:41:18.377636       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:41:18.477951       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:41:18.478010       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1017 19:41:18.478100       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:41:18.497598       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:41:18.497645       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:41:18.502892       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:41:18.503273       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:41:18.503303       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:41:18.504407       1 config.go:200] "Starting service config controller"
	I1017 19:41:18.504438       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:41:18.504440       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:41:18.504451       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:41:18.504417       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:41:18.504490       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:41:18.504517       1 config.go:309] "Starting node config controller"
	I1017 19:41:18.504527       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:41:18.504539       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:41:18.604635       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:41:18.604635       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:41:18.604635       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3c4af638c6379e21034b2badcf605ec633afc47f689a92da70fdcdf1faa4d286] <==
	I1017 19:41:16.307605       1 serving.go:386] Generated self-signed cert in-memory
	W1017 19:41:17.034883       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 19:41:17.034927       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1017 19:41:17.034942       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 19:41:17.034953       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 19:41:17.063375       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 19:41:17.063409       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:41:17.066398       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:41:17.066446       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:41:17.066824       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 19:41:17.066968       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 19:41:17.167063       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:41:21 no-preload-171807 kubelet[709]: I1017 19:41:21.060944     709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6972q\" (UniqueName: \"kubernetes.io/projected/34b498e2-7851-4f05-b246-fe3c7cebbaf9-kube-api-access-6972q\") pod \"dashboard-metrics-scraper-6ffb444bf9-fqmgm\" (UID: \"34b498e2-7851-4f05-b246-fe3c7cebbaf9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm"
	Oct 17 19:41:21 no-preload-171807 kubelet[709]: I1017 19:41:21.643467     709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 17 19:41:23 no-preload-171807 kubelet[709]: I1017 19:41:23.960985     709 scope.go:117] "RemoveContainer" containerID="77fa54ba8c102c85dac4e91877674342372da44cf21b0460f3bb3acacc6202b2"
	Oct 17 19:41:24 no-preload-171807 kubelet[709]: I1017 19:41:24.965565     709 scope.go:117] "RemoveContainer" containerID="77fa54ba8c102c85dac4e91877674342372da44cf21b0460f3bb3acacc6202b2"
	Oct 17 19:41:24 no-preload-171807 kubelet[709]: I1017 19:41:24.965794     709 scope.go:117] "RemoveContainer" containerID="e4f1663501bd0ce9b6a40d5275c3f5abd2e7c8066a89e0b36bda675dda265af6"
	Oct 17 19:41:24 no-preload-171807 kubelet[709]: E1017 19:41:24.966001     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fqmgm_kubernetes-dashboard(34b498e2-7851-4f05-b246-fe3c7cebbaf9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm" podUID="34b498e2-7851-4f05-b246-fe3c7cebbaf9"
	Oct 17 19:41:25 no-preload-171807 kubelet[709]: I1017 19:41:25.970627     709 scope.go:117] "RemoveContainer" containerID="e4f1663501bd0ce9b6a40d5275c3f5abd2e7c8066a89e0b36bda675dda265af6"
	Oct 17 19:41:25 no-preload-171807 kubelet[709]: E1017 19:41:25.970845     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fqmgm_kubernetes-dashboard(34b498e2-7851-4f05-b246-fe3c7cebbaf9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm" podUID="34b498e2-7851-4f05-b246-fe3c7cebbaf9"
	Oct 17 19:41:31 no-preload-171807 kubelet[709]: I1017 19:41:31.105352     709 scope.go:117] "RemoveContainer" containerID="e4f1663501bd0ce9b6a40d5275c3f5abd2e7c8066a89e0b36bda675dda265af6"
	Oct 17 19:41:31 no-preload-171807 kubelet[709]: E1017 19:41:31.105593     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fqmgm_kubernetes-dashboard(34b498e2-7851-4f05-b246-fe3c7cebbaf9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm" podUID="34b498e2-7851-4f05-b246-fe3c7cebbaf9"
	Oct 17 19:41:34 no-preload-171807 kubelet[709]: I1017 19:41:34.517290     709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4kqlp" podStartSLOduration=8.946824263 podStartE2EDuration="14.517268125s" podCreationTimestamp="2025-10-17 19:41:20 +0000 UTC" firstStartedPulling="2025-10-17 19:41:21.275437289 +0000 UTC m=+6.452303591" lastFinishedPulling="2025-10-17 19:41:26.845881165 +0000 UTC m=+12.022747453" observedRunningTime="2025-10-17 19:41:26.985665623 +0000 UTC m=+12.162531930" watchObservedRunningTime="2025-10-17 19:41:34.517268125 +0000 UTC m=+19.694134433"
	Oct 17 19:41:44 no-preload-171807 kubelet[709]: I1017 19:41:44.914495     709 scope.go:117] "RemoveContainer" containerID="e4f1663501bd0ce9b6a40d5275c3f5abd2e7c8066a89e0b36bda675dda265af6"
	Oct 17 19:41:45 no-preload-171807 kubelet[709]: I1017 19:41:45.022814     709 scope.go:117] "RemoveContainer" containerID="e4f1663501bd0ce9b6a40d5275c3f5abd2e7c8066a89e0b36bda675dda265af6"
	Oct 17 19:41:45 no-preload-171807 kubelet[709]: I1017 19:41:45.023064     709 scope.go:117] "RemoveContainer" containerID="b00a978ba6c2ba2beea9f7bc631934a305976c12436f5a13772cbbabda6c49c3"
	Oct 17 19:41:45 no-preload-171807 kubelet[709]: E1017 19:41:45.023273     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fqmgm_kubernetes-dashboard(34b498e2-7851-4f05-b246-fe3c7cebbaf9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm" podUID="34b498e2-7851-4f05-b246-fe3c7cebbaf9"
	Oct 17 19:41:49 no-preload-171807 kubelet[709]: I1017 19:41:49.037192     709 scope.go:117] "RemoveContainer" containerID="a2184126b0f26d397ffbbb79f922291dad5e971092ca6caa2f3d7d4cb54166c9"
	Oct 17 19:41:51 no-preload-171807 kubelet[709]: I1017 19:41:51.106349     709 scope.go:117] "RemoveContainer" containerID="b00a978ba6c2ba2beea9f7bc631934a305976c12436f5a13772cbbabda6c49c3"
	Oct 17 19:41:51 no-preload-171807 kubelet[709]: E1017 19:41:51.106586     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fqmgm_kubernetes-dashboard(34b498e2-7851-4f05-b246-fe3c7cebbaf9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm" podUID="34b498e2-7851-4f05-b246-fe3c7cebbaf9"
	Oct 17 19:42:03 no-preload-171807 kubelet[709]: I1017 19:42:03.913589     709 scope.go:117] "RemoveContainer" containerID="b00a978ba6c2ba2beea9f7bc631934a305976c12436f5a13772cbbabda6c49c3"
	Oct 17 19:42:03 no-preload-171807 kubelet[709]: E1017 19:42:03.913826     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fqmgm_kubernetes-dashboard(34b498e2-7851-4f05-b246-fe3c7cebbaf9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fqmgm" podUID="34b498e2-7851-4f05-b246-fe3c7cebbaf9"
	Oct 17 19:42:05 no-preload-171807 kubelet[709]: I1017 19:42:05.686153     709 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 17 19:42:05 no-preload-171807 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 19:42:05 no-preload-171807 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 19:42:05 no-preload-171807 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 19:42:05 no-preload-171807 systemd[1]: kubelet.service: Consumed 1.697s CPU time.
	
	
	==> kubernetes-dashboard [e35ca6f1c73b7d72497bda5266b591c7c57a2476a6ec5fa6c61165d1cdde7cad] <==
	2025/10/17 19:41:26 Starting overwatch
	2025/10/17 19:41:26 Using namespace: kubernetes-dashboard
	2025/10/17 19:41:26 Using in-cluster config to connect to apiserver
	2025/10/17 19:41:26 Using secret token for csrf signing
	2025/10/17 19:41:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 19:41:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 19:41:26 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 19:41:26 Generating JWE encryption key
	2025/10/17 19:41:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 19:41:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 19:41:27 Initializing JWE encryption key from synchronized object
	2025/10/17 19:41:27 Creating in-cluster Sidecar client
	2025/10/17 19:41:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 19:41:27 Serving insecurely on HTTP port: 9090
	2025/10/17 19:41:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a2184126b0f26d397ffbbb79f922291dad5e971092ca6caa2f3d7d4cb54166c9] <==
	I1017 19:41:18.281213       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 19:41:48.283837       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c38fc3e3e5753ebdf5ea7669f5ca235a915e6ca85e02b4d3a1dd0a1412bfb0b3] <==
	I1017 19:41:49.393339       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 19:41:49.401318       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 19:41:49.401366       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 19:41:49.423343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:41:52.878431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:41:57.139296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:00.738518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:03.793348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:06.816143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:06.821548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:42:06.821805       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 19:42:06.821990       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-171807_b4b7f7e0-b06a-4c76-b258-b44189a5885e!
	I1017 19:42:06.822010       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e74a4cee-e08d-4268-aaf1-9d923d1555d4", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-171807_b4b7f7e0-b06a-4c76-b258-b44189a5885e became leader
	W1017 19:42:06.824623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:06.828238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:42:06.922252       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-171807_b4b7f7e0-b06a-4c76-b258-b44189a5885e!
	W1017 19:42:08.832554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:08.837170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:10.840300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:10.844451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
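
Note on the storage-provisioner output above: the first instance died because the in-cluster apiserver address (10.96.0.1:443) timed out during the restart, and the replacement's repeated "v1 Endpoints is deprecated" warnings come from its Endpoints-based leader election. As a diagnostic sketch (not part of the test run; object names are taken from the LeaderElection event in the log), the lock object can be inspected with:

	kubectl --context no-preload-171807 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	kubectl --context no-preload-171807 -n kube-system get leases    # Lease is the non-deprecated lock type
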
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-171807 -n no-preload-171807
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-171807 -n no-preload-171807: exit status 2 (354.220339ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-171807 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.37s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-599709 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-599709 --alsologtostderr -v=1: exit status 80 (2.068199075s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-599709 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:42:32.705511  756587 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:42:32.705625  756587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:42:32.705632  756587 out.go:374] Setting ErrFile to fd 2...
	I1017 19:42:32.705639  756587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:42:32.705951  756587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:42:32.706286  756587 out.go:368] Setting JSON to false
	I1017 19:42:32.706360  756587 mustload.go:65] Loading cluster: embed-certs-599709
	I1017 19:42:32.706946  756587 config.go:182] Loaded profile config "embed-certs-599709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:42:32.707583  756587 cli_runner.go:164] Run: docker container inspect embed-certs-599709 --format={{.State.Status}}
	I1017 19:42:32.734641  756587 host.go:66] Checking if "embed-certs-599709" exists ...
	I1017 19:42:32.735021  756587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:42:32.831917  756587 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-17 19:42:32.81972134 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:42:32.832856  756587 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-599709 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 19:42:32.834783  756587 out.go:179] * Pausing node embed-certs-599709 ... 
	I1017 19:42:32.836015  756587 host.go:66] Checking if "embed-certs-599709" exists ...
	I1017 19:42:32.836390  756587 ssh_runner.go:195] Run: systemctl --version
	I1017 19:42:32.836446  756587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-599709
	I1017 19:42:32.860080  756587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/embed-certs-599709/id_rsa Username:docker}
	I1017 19:42:32.963828  756587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:42:32.987772  756587 pause.go:52] kubelet running: true
	I1017 19:42:32.987853  756587 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:42:33.201150  756587 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:42:33.201284  756587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:42:33.283108  756587 cri.go:89] found id: "b1cf4808c3c06930bdd6510b062d88c619d479cded519a43398ca9bd108ed9a3"
	I1017 19:42:33.283139  756587 cri.go:89] found id: "c71792b79c961b845a6b99c5bcccfc46f9de23c2206aa747b29a31c68e849961"
	I1017 19:42:33.283153  756587 cri.go:89] found id: "08a97449e70420f352437d2e7b662ce49460b732cd9f801bfcc38ab73978576f"
	I1017 19:42:33.283158  756587 cri.go:89] found id: "86d44709917efa4ea7da20b25e01fb82cc2d82f4b052c55a4b04e72bf8d2ac0d"
	I1017 19:42:33.283161  756587 cri.go:89] found id: "0441540fc07f4acf1b274f1e141b3f57fb47af829295b17c4515e4481635ddec"
	I1017 19:42:33.283165  756587 cri.go:89] found id: "9229cd3e223ec817b5885265f0c88a1b78735a34ba5f6a4b4723d3fee1cf4d34"
	I1017 19:42:33.283169  756587 cri.go:89] found id: "eeadd287c3bf74a34717467fb1adfa03126b04b4a20a9dd1ecd6ef8e5fa4c43a"
	I1017 19:42:33.283173  756587 cri.go:89] found id: "3320bb4791740d09b759229a773dc3c8b5f46f29bca00968f79441653fafafce"
	I1017 19:42:33.283177  756587 cri.go:89] found id: "eccf39ad86610aefaf8eaf41939eb4ad09f3ebbd9c6afbe871000f0047c47987"
	I1017 19:42:33.283187  756587 cri.go:89] found id: "68f94505306836a4afa09de153f7145228fbb668039a0e3b489f7fe6b12a5b07"
	I1017 19:42:33.283197  756587 cri.go:89] found id: "41a78ce660ab9948eef365dc04305d4032e755474c32025ba6c3e26c56f866ca"
	I1017 19:42:33.283201  756587 cri.go:89] found id: ""
	I1017 19:42:33.283252  756587 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:42:33.298012  756587 retry.go:31] will retry after 148.873865ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:42:33Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:42:33.447463  756587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:42:33.463665  756587 pause.go:52] kubelet running: false
	I1017 19:42:33.463796  756587 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:42:33.683258  756587 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:42:33.683429  756587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:42:33.779604  756587 cri.go:89] found id: "b1cf4808c3c06930bdd6510b062d88c619d479cded519a43398ca9bd108ed9a3"
	I1017 19:42:33.779667  756587 cri.go:89] found id: "c71792b79c961b845a6b99c5bcccfc46f9de23c2206aa747b29a31c68e849961"
	I1017 19:42:33.779673  756587 cri.go:89] found id: "08a97449e70420f352437d2e7b662ce49460b732cd9f801bfcc38ab73978576f"
	I1017 19:42:33.779678  756587 cri.go:89] found id: "86d44709917efa4ea7da20b25e01fb82cc2d82f4b052c55a4b04e72bf8d2ac0d"
	I1017 19:42:33.779700  756587 cri.go:89] found id: "0441540fc07f4acf1b274f1e141b3f57fb47af829295b17c4515e4481635ddec"
	I1017 19:42:33.779706  756587 cri.go:89] found id: "9229cd3e223ec817b5885265f0c88a1b78735a34ba5f6a4b4723d3fee1cf4d34"
	I1017 19:42:33.779710  756587 cri.go:89] found id: "eeadd287c3bf74a34717467fb1adfa03126b04b4a20a9dd1ecd6ef8e5fa4c43a"
	I1017 19:42:33.779714  756587 cri.go:89] found id: "3320bb4791740d09b759229a773dc3c8b5f46f29bca00968f79441653fafafce"
	I1017 19:42:33.779718  756587 cri.go:89] found id: "eccf39ad86610aefaf8eaf41939eb4ad09f3ebbd9c6afbe871000f0047c47987"
	I1017 19:42:33.779727  756587 cri.go:89] found id: "68f94505306836a4afa09de153f7145228fbb668039a0e3b489f7fe6b12a5b07"
	I1017 19:42:33.779732  756587 cri.go:89] found id: "41a78ce660ab9948eef365dc04305d4032e755474c32025ba6c3e26c56f866ca"
	I1017 19:42:33.779736  756587 cri.go:89] found id: ""
	I1017 19:42:33.779797  756587 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:42:33.796333  756587 retry.go:31] will retry after 556.64468ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:42:33Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:42:34.353892  756587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:42:34.373598  756587 pause.go:52] kubelet running: false
	I1017 19:42:34.373667  756587 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:42:34.574599  756587 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:42:34.574704  756587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:42:34.670301  756587 cri.go:89] found id: "b1cf4808c3c06930bdd6510b062d88c619d479cded519a43398ca9bd108ed9a3"
	I1017 19:42:34.670377  756587 cri.go:89] found id: "c71792b79c961b845a6b99c5bcccfc46f9de23c2206aa747b29a31c68e849961"
	I1017 19:42:34.670389  756587 cri.go:89] found id: "08a97449e70420f352437d2e7b662ce49460b732cd9f801bfcc38ab73978576f"
	I1017 19:42:34.670394  756587 cri.go:89] found id: "86d44709917efa4ea7da20b25e01fb82cc2d82f4b052c55a4b04e72bf8d2ac0d"
	I1017 19:42:34.670398  756587 cri.go:89] found id: "0441540fc07f4acf1b274f1e141b3f57fb47af829295b17c4515e4481635ddec"
	I1017 19:42:34.670403  756587 cri.go:89] found id: "9229cd3e223ec817b5885265f0c88a1b78735a34ba5f6a4b4723d3fee1cf4d34"
	I1017 19:42:34.670407  756587 cri.go:89] found id: "eeadd287c3bf74a34717467fb1adfa03126b04b4a20a9dd1ecd6ef8e5fa4c43a"
	I1017 19:42:34.670411  756587 cri.go:89] found id: "3320bb4791740d09b759229a773dc3c8b5f46f29bca00968f79441653fafafce"
	I1017 19:42:34.670415  756587 cri.go:89] found id: "eccf39ad86610aefaf8eaf41939eb4ad09f3ebbd9c6afbe871000f0047c47987"
	I1017 19:42:34.670425  756587 cri.go:89] found id: "68f94505306836a4afa09de153f7145228fbb668039a0e3b489f7fe6b12a5b07"
	I1017 19:42:34.670433  756587 cri.go:89] found id: "41a78ce660ab9948eef365dc04305d4032e755474c32025ba6c3e26c56f866ca"
	I1017 19:42:34.670437  756587 cri.go:89] found id: ""
	I1017 19:42:34.670481  756587 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:42:34.690856  756587 out.go:203] 
	W1017 19:42:34.692626  756587 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:42:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:42:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:42:34.692900  756587 out.go:285] * 
	* 
	W1017 19:42:34.702049  756587 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:42:34.705141  756587 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-599709 --alsologtostderr -v=1 failed: exit status 80
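
The stderr above shows the failure mechanism: pause disables the kubelet, lists the CRI containers via crictl successfully, but every `sudo runc list -f json` attempt fails with "open /run/runc: no such file or directory" until the retries are exhausted and the command exits 80 with GUEST_PAUSE. A minimal sketch for reproducing the enumeration step by hand, assuming the profile is still running (the first two commands are copied from the trace; the grep is an assumption about where CRI-O configures its runtime root):

	out/minikube-linux-amd64 ssh -p embed-certs-599709 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-amd64 ssh -p embed-certs-599709 -- sudo runc list -f json    # reproduces: open /run/runc: no such file or directory
	out/minikube-linux-amd64 ssh -p embed-certs-599709 -- grep -rn runtime_root /etc/crio/    # hypothetical check: runc state may live under a non-default root
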
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-599709
helpers_test.go:243: (dbg) docker inspect embed-certs-599709:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590",
	        "Created": "2025-10-17T19:40:26.431376563Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 741367,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:41:29.018758586Z",
	            "FinishedAt": "2025-10-17T19:41:27.760245729Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590/hostname",
	        "HostsPath": "/var/lib/docker/containers/65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590/hosts",
	        "LogPath": "/var/lib/docker/containers/65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590/65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590-json.log",
	        "Name": "/embed-certs-599709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-599709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-599709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590",
	                "LowerDir": "/var/lib/docker/overlay2/dd68b6694848505f3dbe0f1fc8175ad12403dd916c34260859d346ccfc6326c8-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dd68b6694848505f3dbe0f1fc8175ad12403dd916c34260859d346ccfc6326c8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dd68b6694848505f3dbe0f1fc8175ad12403dd916c34260859d346ccfc6326c8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dd68b6694848505f3dbe0f1fc8175ad12403dd916c34260859d346ccfc6326c8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-599709",
	                "Source": "/var/lib/docker/volumes/embed-certs-599709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-599709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-599709",
	                "name.minikube.sigs.k8s.io": "embed-certs-599709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d6634a3534ea8f49afc8c978745d865c99673b7e5e4804fe71f574c5917c31d3",
	            "SandboxKey": "/var/run/docker/netns/d6634a3534ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-599709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:3a:ef:83:50:11",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "116cc729b1af4d4ec359cb40c0efa07f90c3ee85e9adaa14764bb2ee64de2228",
	                    "EndpointID": "83df366d801b14a0c479c2214762c83d06bc26d0e49ee1473e6ae4f0c11f5c4d",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-599709",
	                        "65267e6fd2cc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
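
For cross-reference: the ssh client in the pause trace (127.0.0.1:33448) is resolved from the NetworkSettings.Ports block above, using the same Go template the command logged:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-599709
	# prints 33448, the host port mapped to the container's 22/tcp
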
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-599709 -n embed-certs-599709
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-599709 -n embed-certs-599709: exit status 2 (497.47296ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-599709 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-599709 logs -n 25: (1.509057546s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable metrics-server -p no-preload-171807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ stop    │ -p no-preload-171807 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p no-preload-171807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-599709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ stop    │ -p embed-certs-599709 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p embed-certs-599709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:42 UTC │
	│ image   │ old-k8s-version-907112 image list --format=json                                                                                                                                                                                               │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-907112 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ delete  │ -p old-k8s-version-907112                                                                                                                                                                                                                     │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-907112                                                                                                                                                                                                                     │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ delete  │ -p disable-driver-mounts-220565                                                                                                                                                                                                               │ disable-driver-mounts-220565 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p default-k8s-diff-port-112878 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:42 UTC │
	│ image   │ no-preload-171807 image list --format=json                                                                                                                                                                                                    │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ pause   │ -p no-preload-171807 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ delete  │ -p no-preload-171807                                                                                                                                                                                                                          │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ delete  │ -p no-preload-171807                                                                                                                                                                                                                          │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-438547 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-137244    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-137244    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ image   │ embed-certs-599709 image list --format=json                                                                                                                                                                                                   │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ pause   │ -p embed-certs-599709 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-112878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:42:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:42:32.284642  756339 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:42:32.284938  756339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:42:32.284948  756339 out.go:374] Setting ErrFile to fd 2...
	I1017 19:42:32.284952  756339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:42:32.285167  756339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:42:32.285627  756339 out.go:368] Setting JSON to false
	I1017 19:42:32.286955  756339 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12291,"bootTime":1760717861,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:42:32.287079  756339 start.go:141] virtualization: kvm guest
	I1017 19:42:32.288866  756339 out.go:179] * [kubernetes-upgrade-137244] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:42:32.290361  756339 notify.go:220] Checking for updates...
	I1017 19:42:32.290381  756339 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:42:32.291717  756339 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:42:32.293887  756339 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:42:32.295817  756339 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:42:32.297786  756339 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:42:32.299504  756339 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:42:32.302054  756339 config.go:182] Loaded profile config "kubernetes-upgrade-137244": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:42:32.302866  756339 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:42:32.330847  756339 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:42:32.330958  756339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:42:32.404999  756339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 19:42:32.393716398 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:42:32.405120  756339 docker.go:318] overlay module found
	I1017 19:42:32.407813  756339 out.go:179] * Using the docker driver based on existing profile
	I1017 19:42:32.409163  756339 start.go:305] selected driver: docker
	I1017 19:42:32.409186  756339 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-137244 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-137244 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:42:32.409310  756339 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:42:32.410021  756339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:42:32.478656  756339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 19:42:32.466730101 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:42:32.479162  756339 cni.go:84] Creating CNI manager for ""
	I1017 19:42:32.479244  756339 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:42:32.479327  756339 start.go:349] cluster config:
	{Name:kubernetes-upgrade-137244 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-137244 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:42:32.481696  756339 out.go:179] * Starting "kubernetes-upgrade-137244" primary control-plane node in "kubernetes-upgrade-137244" cluster
	I1017 19:42:32.483108  756339 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:42:32.484546  756339 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:42:32.485879  756339 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:42:32.485924  756339 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:42:32.485945  756339 cache.go:58] Caching tarball of preloaded images
	I1017 19:42:32.485996  756339 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:42:32.486050  756339 preload.go:233] Found /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:42:32.486066  756339 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:42:32.486222  756339 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/kubernetes-upgrade-137244/config.json ...
	I1017 19:42:32.511005  756339 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:42:32.511029  756339 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:42:32.511049  756339 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:42:32.511082  756339 start.go:360] acquireMachinesLock for kubernetes-upgrade-137244: {Name:mk295f0c37c369f712e9c8f3857f62f6297f3f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:42:32.511151  756339 start.go:364] duration metric: took 47.722µs to acquireMachinesLock for "kubernetes-upgrade-137244"
	I1017 19:42:32.511179  756339 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:42:32.511186  756339 fix.go:54] fixHost starting: 
	I1017 19:42:32.511477  756339 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-137244 --format={{.State.Status}}
	I1017 19:42:32.531731  756339 fix.go:112] recreateIfNeeded on kubernetes-upgrade-137244: state=Running err=<nil>
	W1017 19:42:32.531776  756339 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:42:30.943588  753072 out.go:252]   - Booting up control plane ...
	I1017 19:42:30.943749  753072 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 19:42:30.943877  753072 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 19:42:30.943979  753072 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 19:42:30.958565  753072 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 19:42:30.958772  753072 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 19:42:30.965606  753072 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 19:42:30.965883  753072 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 19:42:30.965992  753072 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 19:42:31.077801  753072 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 19:42:31.077992  753072 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 19:42:32.579046  753072 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501467006s
	I1017 19:42:32.582509  753072 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 19:42:32.582645  753072 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1017 19:42:32.582797  753072 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 19:42:32.582906  753072 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
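The kubeadm health checks above poll fixed local endpoints: the kubelet at http://127.0.0.1:10248/healthz and, over HTTPS with self-signed serving certificates, the controller manager at 127.0.0.1:10257/healthz and the scheduler at 127.0.0.1:10259/livez. A minimal Go sketch of the same probes, run on the node itself (the endpoint list is copied from the log lines above; this is illustrative, not kubeadm's implementation):

	// healthprobe.go - sketch of the control-plane health checks kubeadm
	// performs above. Endpoints are taken from the log output.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		endpoints := []string{
			"http://127.0.0.1:10248/healthz",  // kubelet
			"https://127.0.0.1:10257/healthz", // kube-controller-manager
			"https://127.0.0.1:10259/livez",   // kube-scheduler
		}
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The HTTPS components serve self-signed certs, so skip
			// verification for this local, illustrative probe only.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for _, url := range endpoints {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("%s: %v\n", url, err)
				continue
			}
			resp.Body.Close()
			fmt.Printf("%s: %s\n", url, resp.Status)
		}
	}

A 200 from each endpoint is what lets kubeadm report "The kubelet is healthy" above and move on to the control-plane component checks.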
	
	
	==> CRI-O <==
	Oct 17 19:41:51 embed-certs-599709 crio[563]: time="2025-10-17T19:41:51.916509303Z" level=info msg="Started container" PID=1748 containerID=5b68e2e60da42953705cbec93f7d42eff4c4084f455b0009cb96a35e50ff851e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n/dashboard-metrics-scraper id=d5e1214a-d1fa-45c2-8e3f-13cd95addf6b name=/runtime.v1.RuntimeService/StartContainer sandboxID=0079e08b246389f561f22d2cd672b23163f6f935a5f77f44b2c7f93667c2e8ac
	Oct 17 19:41:52 embed-certs-599709 crio[563]: time="2025-10-17T19:41:52.862346997Z" level=info msg="Removing container: 6e732383138061518d6fab80051b5f2939e6d4d8e32b105d147db2f432edbe2e" id=99ceb06b-4afd-4a31-a4f1-1c842a453a28 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:41:52 embed-certs-599709 crio[563]: time="2025-10-17T19:41:52.874390887Z" level=info msg="Removed container 6e732383138061518d6fab80051b5f2939e6d4d8e32b105d147db2f432edbe2e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n/dashboard-metrics-scraper" id=99ceb06b-4afd-4a31-a4f1-1c842a453a28 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.772920813Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b483c6c0-be40-4d3d-b083-10db7ecb0303 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.774014755Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ae644ebe-088a-401d-8ade-f449860049fa name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.775127377Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n/dashboard-metrics-scraper" id=9b845a05-a524-4ba8-a5db-55a85d500c62 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.775433254Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.781786311Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.78250666Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.81894298Z" level=info msg="Created container 68f94505306836a4afa09de153f7145228fbb668039a0e3b489f7fe6b12a5b07: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n/dashboard-metrics-scraper" id=9b845a05-a524-4ba8-a5db-55a85d500c62 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.819767664Z" level=info msg="Starting container: 68f94505306836a4afa09de153f7145228fbb668039a0e3b489f7fe6b12a5b07" id=be9157be-f18d-444a-ba43-bdb3c8ebab8a name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.821887938Z" level=info msg="Started container" PID=1758 containerID=68f94505306836a4afa09de153f7145228fbb668039a0e3b489f7fe6b12a5b07 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n/dashboard-metrics-scraper id=be9157be-f18d-444a-ba43-bdb3c8ebab8a name=/runtime.v1.RuntimeService/StartContainer sandboxID=0079e08b246389f561f22d2cd672b23163f6f935a5f77f44b2c7f93667c2e8ac
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.902407826Z" level=info msg="Removing container: 5b68e2e60da42953705cbec93f7d42eff4c4084f455b0009cb96a35e50ff851e" id=6b19a09f-d757-4106-a92f-b48c16b00adc name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.914511848Z" level=info msg="Removed container 5b68e2e60da42953705cbec93f7d42eff4c4084f455b0009cb96a35e50ff851e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n/dashboard-metrics-scraper" id=6b19a09f-d757-4106-a92f-b48c16b00adc name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.916577311Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=61354ab9-b58d-4104-b0f2-15b3a9bd543d name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.917799779Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b56311bd-54ab-422a-b4c9-e7406c29abdb name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.921302409Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=fdca4123-9f1f-430d-b7c9-be08f3cbcd2c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.921621575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.927156633Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.927616408Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c917dfce4e5c09893cfbc1ce785adb2cc67c17c7fdb6a6c68fb0e8b5ca3f079c/merged/etc/passwd: no such file or directory"
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.927765503Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c917dfce4e5c09893cfbc1ce785adb2cc67c17c7fdb6a6c68fb0e8b5ca3f079c/merged/etc/group: no such file or directory"
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.928218436Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.968327073Z" level=info msg="Created container b1cf4808c3c06930bdd6510b062d88c619d479cded519a43398ca9bd108ed9a3: kube-system/storage-provisioner/storage-provisioner" id=fdca4123-9f1f-430d-b7c9-be08f3cbcd2c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.969315416Z" level=info msg="Starting container: b1cf4808c3c06930bdd6510b062d88c619d479cded519a43398ca9bd108ed9a3" id=3a1a0276-ea3a-4ae1-aae9-04dedefd9237 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.972256598Z" level=info msg="Started container" PID=1772 containerID=b1cf4808c3c06930bdd6510b062d88c619d479cded519a43398ca9bd108ed9a3 description=kube-system/storage-provisioner/storage-provisioner id=3a1a0276-ea3a-4ae1-aae9-04dedefd9237 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5a222b88f349ee6aad4aac986c2daf779da4e56f90f7ffa212bdeb168972754
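The CreateContainer/StartContainer/RemoveContainer entries above are CRI-O handling gRPC calls on the standard CRI RuntimeService, served on its local unix socket. A minimal sketch of a client against that same API, assuming the stock CRI-O socket path and the k8s.io/cri-api package (illustrative; crictl is the usual tool for this):

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O listens on a local unix socket; crictl uses the same endpoint.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// IDs are 64 hex chars; the 13-char prefix matches the
			// truncated form in the container status table below.
			fmt.Printf("%s %v %s\n", c.Id[:13], c.State, c.Metadata.Name)
		}
	}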
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b1cf4808c3c06       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago      Running             storage-provisioner         1                   b5a222b88f349       storage-provisioner                          kube-system
	68f9450530683       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago      Exited              dashboard-metrics-scraper   2                   0079e08b24638       dashboard-metrics-scraper-6ffb444bf9-xw42n   kubernetes-dashboard
	41a78ce660ab9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago      Running             kubernetes-dashboard        0                   02cefe766e6af       kubernetes-dashboard-855c9754f9-mh7df        kubernetes-dashboard
	c71792b79c961       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   d57c56ca50297       coredns-66bc5c9577-v8hls                     kube-system
	724b28842e066       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   96da245b5f256       busybox                                      default
	08a97449e7042       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   b5a222b88f349       storage-provisioner                          kube-system
	86d44709917ef       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   928fdd095ceaf       kube-proxy-l2pwz                             kube-system
	0441540fc07f4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   17fd20279adeb       kindnet-sj7sj                                kube-system
	9229cd3e223ec       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   09ba589c11933       kube-controller-manager-embed-certs-599709   kube-system
	eeadd287c3bf7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   b874ce0264ac9       kube-apiserver-embed-certs-599709            kube-system
	3320bb4791740       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   47558cae563fa       kube-scheduler-embed-certs-599709            kube-system
	eccf39ad86610       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   8424a0aef5e0d       etcd-embed-certs-599709                      kube-system
	
	
	==> coredns [c71792b79c961b845a6b99c5bcccfc46f9de23c2206aa747b29a31c68e849961] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49433 - 18909 "HINFO IN 4959494782317338407.7900292932436731498. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.083606372s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
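The three i/o timeouts above are CoreDNS's informers failing to reach the kubernetes Service VIP (10.96.0.1:443) during the window after restart, before kube-proxy and kindnet had finished programming the dataplane; the listers recover once the rules are in place. The failing check reduces to a plain TCP dial, sketched below with the address taken from the log (illustrative only):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same connectivity question CoreDNS's client-go reflectors hit:
		// is the kubernetes Service VIP reachable from the pod network?
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("unreachable:", err) // matches the "i/o timeout" above
			return
		}
		conn.Close()
		fmt.Println("reachable")
	}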
	
	
	==> describe nodes <==
	Name:               embed-certs-599709
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-599709
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=embed-certs-599709
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_40_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:40:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-599709
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:42:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:42:09 +0000   Fri, 17 Oct 2025 19:40:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:42:09 +0000   Fri, 17 Oct 2025 19:40:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:42:09 +0000   Fri, 17 Oct 2025 19:40:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:42:09 +0000   Fri, 17 Oct 2025 19:40:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-599709
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                4ab96baf-e93c-4e34-b927-fdc987244361
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-v8hls                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-embed-certs-599709                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-sj7sj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-599709             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-599709    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-l2pwz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-599709             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-xw42n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mh7df         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node embed-certs-599709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node embed-certs-599709 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node embed-certs-599709 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node embed-certs-599709 event: Registered Node embed-certs-599709 in Controller
	  Normal  NodeReady                98s                kubelet          Node embed-certs-599709 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node embed-certs-599709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node embed-certs-599709 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node embed-certs-599709 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                node-controller  Node embed-certs-599709 event: Registered Node embed-certs-599709 in Controller
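The Conditions table above (MemoryPressure, DiskPressure and PIDPressure False, Ready True) is rendered from Node.Status.Conditions. A minimal client-go sketch that reads the same rows, assuming the default kubeconfig and the node name from this report (illustrative only):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(),
			"embed-certs-599709", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// The same rows "describe nodes" prints in its Conditions table.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}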
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d1 49 91 03 c2 08 06
	[  +0.000804] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 a9 2b 44 da ae 08 06
	[Oct17 18:59] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022229] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023876] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	
	
	==> etcd [eccf39ad86610aefaf8eaf41939eb4ad09f3ebbd9c6afbe871000f0047c47987] <==
	{"level":"warn","ts":"2025-10-17T19:41:49.595360Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.751538ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-v8hls\" limit:1 ","response":"range_response_count:1 size:5934"}
	{"level":"info","ts":"2025-10-17T19:41:49.595395Z","caller":"traceutil/trace.go:172","msg":"trace[1816212144] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-v8hls; range_end:; response_count:1; response_revision:576; }","duration":"124.804913ms","start":"2025-10-17T19:41:49.470579Z","end":"2025-10-17T19:41:49.595384Z","steps":["trace[1816212144] 'agreement among raft nodes before linearized reading'  (duration: 124.63996ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:41:49.595442Z","caller":"traceutil/trace.go:172","msg":"trace[496287099] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"207.078087ms","start":"2025-10-17T19:41:49.388341Z","end":"2025-10-17T19:41:49.595419Z","steps":["trace[496287099] 'process raft request'  (duration: 206.86249ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:41:49.717418Z","caller":"traceutil/trace.go:172","msg":"trace[1805226362] linearizableReadLoop","detail":"{readStateIndex:605; appliedIndex:605; }","duration":"118.586444ms","start":"2025-10-17T19:41:49.598804Z","end":"2025-10-17T19:41:49.717391Z","steps":["trace[1805226362] 'read index received'  (duration: 118.578335ms)","trace[1805226362] 'applied index is now lower than readState.Index'  (duration: 6.879µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T19:41:49.920164Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"321.332724ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-599709\" limit:1 ","response":"range_response_count:1 size:5685"}
	{"level":"warn","ts":"2025-10-17T19:41:49.920179Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"321.357598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-599709\" limit:1 ","response":"range_response_count:1 size:8201"}
	{"level":"info","ts":"2025-10-17T19:41:49.920223Z","caller":"traceutil/trace.go:172","msg":"trace[249947990] range","detail":"{range_begin:/registry/minions/embed-certs-599709; range_end:; response_count:1; response_revision:577; }","duration":"321.409167ms","start":"2025-10-17T19:41:49.598802Z","end":"2025-10-17T19:41:49.920211Z","steps":["trace[249947990] 'agreement among raft nodes before linearized reading'  (duration: 118.640276ms)","trace[249947990] 'range keys from in-memory index tree'  (duration: 202.586626ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T19:41:49.920222Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.744105ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765553191643303 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:5b3399f3b122e8a6>","response":"size:41"}
	{"level":"info","ts":"2025-10-17T19:41:49.920247Z","caller":"traceutil/trace.go:172","msg":"trace[652169401] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-embed-certs-599709; range_end:; response_count:1; response_revision:577; }","duration":"321.425216ms","start":"2025-10-17T19:41:49.598795Z","end":"2025-10-17T19:41:49.920220Z","steps":["trace[652169401] 'agreement among raft nodes before linearized reading'  (duration: 118.713128ms)","trace[652169401] 'range keys from in-memory index tree'  (duration: 202.552724ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T19:41:49.920277Z","caller":"traceutil/trace.go:172","msg":"trace[2103183253] linearizableReadLoop","detail":"{readStateIndex:606; appliedIndex:605; }","duration":"202.787348ms","start":"2025-10-17T19:41:49.717482Z","end":"2025-10-17T19:41:49.920269Z","steps":["trace[2103183253] 'read index received'  (duration: 37.298µs)","trace[2103183253] 'applied index is now lower than readState.Index'  (duration: 202.749374ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T19:41:49.920257Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-17T19:41:49.598784Z","time spent":"321.464586ms","remote":"127.0.0.1:54464","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":5709,"request content":"key:\"/registry/minions/embed-certs-599709\" limit:1 "}
	{"level":"warn","ts":"2025-10-17T19:41:49.920307Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-17T19:41:49.598779Z","time spent":"321.491697ms","remote":"127.0.0.1:54492","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":8225,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-599709\" limit:1 "}
	{"level":"warn","ts":"2025-10-17T19:41:49.920321Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"318.35049ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T19:41:49.920342Z","caller":"traceutil/trace.go:172","msg":"trace[1576917677] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:577; }","duration":"318.373756ms","start":"2025-10-17T19:41:49.601964Z","end":"2025-10-17T19:41:49.920338Z","steps":["trace[1576917677] 'agreement among raft nodes before linearized reading'  (duration: 318.331253ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:41:49.920355Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-17T19:41:49.601949Z","time spent":"318.403237ms","remote":"127.0.0.1:54176","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-10-17T19:41:49.920302Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-17T19:41:49.596299Z","time spent":"324.000197ms","remote":"127.0.0.1:54230","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2025-10-17T19:41:50.046471Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.79366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2025-10-17T19:41:50.046529Z","caller":"traceutil/trace.go:172","msg":"trace[379849208] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:578; }","duration":"100.870602ms","start":"2025-10-17T19:41:49.945647Z","end":"2025-10-17T19:41:50.046518Z","steps":["trace[379849208] 'agreement among raft nodes before linearized reading'  (duration: 88.495738ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:41:50.046567Z","caller":"traceutil/trace.go:172","msg":"trace[1264162776] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"116.945751ms","start":"2025-10-17T19:41:49.929596Z","end":"2025-10-17T19:41:50.046542Z","steps":["trace[1264162776] 'process raft request'  (duration: 104.621672ms)","trace[1264162776] 'compare'  (duration: 12.14726ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T19:41:50.271672Z","caller":"traceutil/trace.go:172","msg":"trace[1439312170] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"171.199748ms","start":"2025-10-17T19:41:50.100450Z","end":"2025-10-17T19:41:50.271649Z","steps":["trace[1439312170] 'process raft request'  (duration: 145.884315ms)","trace[1439312170] 'compare'  (duration: 25.197994ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T19:42:19.010527Z","caller":"traceutil/trace.go:172","msg":"trace[1578569325] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"137.301399ms","start":"2025-10-17T19:42:18.873200Z","end":"2025-10-17T19:42:19.010502Z","steps":["trace[1578569325] 'process raft request'  (duration: 137.179644ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:19.208847Z","caller":"traceutil/trace.go:172","msg":"trace[1045870536] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"193.575977ms","start":"2025-10-17T19:42:19.015245Z","end":"2025-10-17T19:42:19.208821Z","steps":["trace[1045870536] 'process raft request'  (duration: 120.896375ms)","trace[1045870536] 'compare'  (duration: 72.565122ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T19:42:19.209415Z","caller":"traceutil/trace.go:172","msg":"trace[1702464014] transaction","detail":"{read_only:false; response_revision:623; number_of_response:1; }","duration":"194.111791ms","start":"2025-10-17T19:42:19.015285Z","end":"2025-10-17T19:42:19.209397Z","steps":["trace[1702464014] 'process raft request'  (duration: 193.976769ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:19.209590Z","caller":"traceutil/trace.go:172","msg":"trace[499014038] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"192.099308ms","start":"2025-10-17T19:42:19.017478Z","end":"2025-10-17T19:42:19.209577Z","steps":["trace[499014038] 'process raft request'  (duration: 191.880626ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:19.463016Z","caller":"traceutil/trace.go:172","msg":"trace[828346930] transaction","detail":"{read_only:false; response_revision:627; number_of_response:1; }","duration":"101.399638ms","start":"2025-10-17T19:42:19.361599Z","end":"2025-10-17T19:42:19.462999Z","steps":["trace[828346930] 'process raft request'  (duration: 64.432521ms)","trace[828346930] 'compare'  (duration: 36.860103ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:42:36 up  3:24,  0 user,  load average: 3.90, 3.40, 2.19
	Linux embed-certs-599709 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0441540fc07f4acf1b274f1e141b3f57fb47af829295b17c4515e4481635ddec] <==
	I1017 19:41:39.305331       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:41:39.305665       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1017 19:41:39.305876       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:41:39.305895       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:41:39.305915       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:41:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:41:39.603569       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:41:39.603615       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:41:39.603630       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:41:39.603823       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:41:40.104152       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:41:40.104195       1 metrics.go:72] Registering metrics
	I1017 19:41:40.104266       1 controller.go:711] "Syncing nftables rules"
	I1017 19:41:49.603955       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 19:41:49.603998       1 main.go:301] handling current node
	I1017 19:41:59.606815       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 19:41:59.606857       1 main.go:301] handling current node
	I1017 19:42:09.603855       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 19:42:09.603888       1 main.go:301] handling current node
	I1017 19:42:19.603586       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 19:42:19.603620       1 main.go:301] handling current node
	I1017 19:42:29.611770       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 19:42:29.611808       1 main.go:301] handling current node
	
	
	==> kube-apiserver [eeadd287c3bf74a34717467fb1adfa03126b04b4a20a9dd1ecd6ef8e5fa4c43a] <==
	I1017 19:41:38.384891       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 19:41:38.385067       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 19:41:38.384750       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1017 19:41:38.385268       1 aggregator.go:171] initial CRD sync complete...
	I1017 19:41:38.385278       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 19:41:38.385286       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 19:41:38.385292       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:41:38.385479       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:41:38.385521       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:41:38.385534       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 19:41:38.385522       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 19:41:38.393311       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 19:41:38.408733       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:41:38.426107       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:41:38.680986       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 19:41:38.712353       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 19:41:38.732837       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:41:38.741456       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:41:38.753963       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:41:38.800603       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.218.14"}
	I1017 19:41:38.814418       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.64.45"}
	I1017 19:41:39.286991       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:41:42.114327       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:41:42.163995       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:41:42.264708       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [9229cd3e223ec817b5885265f0c88a1b78735a34ba5f6a4b4723d3fee1cf4d34] <==
	I1017 19:41:41.710355       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 19:41:41.710291       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 19:41:41.710308       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:41:41.710484       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 19:41:41.710567       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 19:41:41.710573       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 19:41:41.710589       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 19:41:41.710724       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 19:41:41.711757       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 19:41:41.713735       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 19:41:41.714929       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:41:41.730571       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 19:41:41.730572       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:41:41.730647       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 19:41:41.730717       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 19:41:41.730730       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 19:41:41.730739       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 19:41:41.733809       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 19:41:41.736088       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:41:41.738439       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 19:41:41.738564       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 19:41:41.738712       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-599709"
	I1017 19:41:41.738819       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 19:41:41.744946       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:41:41.747207       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [86d44709917efa4ea7da20b25e01fb82cc2d82f4b052c55a4b04e72bf8d2ac0d] <==
	I1017 19:41:39.174092       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:41:39.228868       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:41:39.329101       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:41:39.329143       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1017 19:41:39.329257       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:41:39.352672       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:41:39.352773       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:41:39.359582       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:41:39.360093       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:41:39.360127       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:41:39.362272       1 config.go:309] "Starting node config controller"
	I1017 19:41:39.362307       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:41:39.362317       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:41:39.362272       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:41:39.362325       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:41:39.362697       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:41:39.362714       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:41:39.362864       1 config.go:200] "Starting service config controller"
	I1017 19:41:39.362941       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:41:39.462769       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:41:39.463866       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:41:39.463946       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [3320bb4791740d09b759229a773dc3c8b5f46f29bca00968f79441653fafafce] <==
	I1017 19:41:38.330656       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:41:38.335597       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:41:38.335727       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1017 19:41:38.339194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1017 19:41:38.339351       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 19:41:38.340092       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1017 19:41:38.351084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:41:38.351966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:41:38.352069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:41:38.352347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:41:38.352470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:41:38.352774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:41:38.352902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:41:38.353015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:41:38.353198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:41:38.353316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:41:38.353442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:41:38.353581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:41:38.353679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:41:38.353870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:41:38.353983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:41:38.354311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:41:38.354389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:41:38.354457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1017 19:41:39.636278       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:41:42 embed-certs-599709 kubelet[721]: I1017 19:41:42.438988     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/57cea4eb-4449-4f85-a911-073e40686fda-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-xw42n\" (UID: \"57cea4eb-4449-4f85-a911-073e40686fda\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n"
	Oct 17 19:41:42 embed-certs-599709 kubelet[721]: I1017 19:41:42.439027     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k74pq\" (UniqueName: \"kubernetes.io/projected/548ef298-e15a-4b09-831b-288b15fb3a90-kube-api-access-k74pq\") pod \"kubernetes-dashboard-855c9754f9-mh7df\" (UID: \"548ef298-e15a-4b09-831b-288b15fb3a90\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mh7df"
	Oct 17 19:41:48 embed-certs-599709 kubelet[721]: I1017 19:41:48.857078     721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 17 19:41:49 embed-certs-599709 kubelet[721]: I1017 19:41:49.597034     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mh7df" podStartSLOduration=4.057737185 podStartE2EDuration="7.597011826s" podCreationTimestamp="2025-10-17 19:41:42 +0000 UTC" firstStartedPulling="2025-10-17 19:41:42.673439565 +0000 UTC m=+7.009794799" lastFinishedPulling="2025-10-17 19:41:46.212714191 +0000 UTC m=+10.549069440" observedRunningTime="2025-10-17 19:41:46.856337186 +0000 UTC m=+11.192692442" watchObservedRunningTime="2025-10-17 19:41:49.597011826 +0000 UTC m=+13.933367082"
	Oct 17 19:41:51 embed-certs-599709 kubelet[721]: I1017 19:41:51.855648     721 scope.go:117] "RemoveContainer" containerID="6e732383138061518d6fab80051b5f2939e6d4d8e32b105d147db2f432edbe2e"
	Oct 17 19:41:52 embed-certs-599709 kubelet[721]: I1017 19:41:52.860845     721 scope.go:117] "RemoveContainer" containerID="6e732383138061518d6fab80051b5f2939e6d4d8e32b105d147db2f432edbe2e"
	Oct 17 19:41:52 embed-certs-599709 kubelet[721]: I1017 19:41:52.860974     721 scope.go:117] "RemoveContainer" containerID="5b68e2e60da42953705cbec93f7d42eff4c4084f455b0009cb96a35e50ff851e"
	Oct 17 19:41:52 embed-certs-599709 kubelet[721]: E1017 19:41:52.861168     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xw42n_kubernetes-dashboard(57cea4eb-4449-4f85-a911-073e40686fda)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n" podUID="57cea4eb-4449-4f85-a911-073e40686fda"
	Oct 17 19:41:53 embed-certs-599709 kubelet[721]: I1017 19:41:53.866497     721 scope.go:117] "RemoveContainer" containerID="5b68e2e60da42953705cbec93f7d42eff4c4084f455b0009cb96a35e50ff851e"
	Oct 17 19:41:53 embed-certs-599709 kubelet[721]: E1017 19:41:53.866752     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xw42n_kubernetes-dashboard(57cea4eb-4449-4f85-a911-073e40686fda)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n" podUID="57cea4eb-4449-4f85-a911-073e40686fda"
	Oct 17 19:41:54 embed-certs-599709 kubelet[721]: I1017 19:41:54.869362     721 scope.go:117] "RemoveContainer" containerID="5b68e2e60da42953705cbec93f7d42eff4c4084f455b0009cb96a35e50ff851e"
	Oct 17 19:41:54 embed-certs-599709 kubelet[721]: E1017 19:41:54.869556     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xw42n_kubernetes-dashboard(57cea4eb-4449-4f85-a911-073e40686fda)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n" podUID="57cea4eb-4449-4f85-a911-073e40686fda"
	Oct 17 19:42:05 embed-certs-599709 kubelet[721]: I1017 19:42:05.772458     721 scope.go:117] "RemoveContainer" containerID="5b68e2e60da42953705cbec93f7d42eff4c4084f455b0009cb96a35e50ff851e"
	Oct 17 19:42:05 embed-certs-599709 kubelet[721]: I1017 19:42:05.900943     721 scope.go:117] "RemoveContainer" containerID="5b68e2e60da42953705cbec93f7d42eff4c4084f455b0009cb96a35e50ff851e"
	Oct 17 19:42:05 embed-certs-599709 kubelet[721]: I1017 19:42:05.901167     721 scope.go:117] "RemoveContainer" containerID="68f94505306836a4afa09de153f7145228fbb668039a0e3b489f7fe6b12a5b07"
	Oct 17 19:42:05 embed-certs-599709 kubelet[721]: E1017 19:42:05.901396     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xw42n_kubernetes-dashboard(57cea4eb-4449-4f85-a911-073e40686fda)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n" podUID="57cea4eb-4449-4f85-a911-073e40686fda"
	Oct 17 19:42:09 embed-certs-599709 kubelet[721]: I1017 19:42:09.915309     721 scope.go:117] "RemoveContainer" containerID="08a97449e70420f352437d2e7b662ce49460b732cd9f801bfcc38ab73978576f"
	Oct 17 19:42:13 embed-certs-599709 kubelet[721]: I1017 19:42:13.744501     721 scope.go:117] "RemoveContainer" containerID="68f94505306836a4afa09de153f7145228fbb668039a0e3b489f7fe6b12a5b07"
	Oct 17 19:42:13 embed-certs-599709 kubelet[721]: E1017 19:42:13.744747     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xw42n_kubernetes-dashboard(57cea4eb-4449-4f85-a911-073e40686fda)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n" podUID="57cea4eb-4449-4f85-a911-073e40686fda"
	Oct 17 19:42:25 embed-certs-599709 kubelet[721]: I1017 19:42:25.772427     721 scope.go:117] "RemoveContainer" containerID="68f94505306836a4afa09de153f7145228fbb668039a0e3b489f7fe6b12a5b07"
	Oct 17 19:42:25 embed-certs-599709 kubelet[721]: E1017 19:42:25.772697     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xw42n_kubernetes-dashboard(57cea4eb-4449-4f85-a911-073e40686fda)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n" podUID="57cea4eb-4449-4f85-a911-073e40686fda"
	Oct 17 19:42:33 embed-certs-599709 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 19:42:33 embed-certs-599709 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 19:42:33 embed-certs-599709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 19:42:33 embed-certs-599709 systemd[1]: kubelet.service: Consumed 1.904s CPU time.
	
	
	==> kubernetes-dashboard [41a78ce660ab9948eef365dc04305d4032e755474c32025ba6c3e26c56f866ca] <==
	2025/10/17 19:41:46 Using namespace: kubernetes-dashboard
	2025/10/17 19:41:46 Using in-cluster config to connect to apiserver
	2025/10/17 19:41:46 Using secret token for csrf signing
	2025/10/17 19:41:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 19:41:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 19:41:46 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 19:41:46 Generating JWE encryption key
	2025/10/17 19:41:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 19:41:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 19:41:46 Initializing JWE encryption key from synchronized object
	2025/10/17 19:41:46 Creating in-cluster Sidecar client
	2025/10/17 19:41:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 19:41:46 Serving insecurely on HTTP port: 9090
	2025/10/17 19:42:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 19:41:46 Starting overwatch
	
	
	==> storage-provisioner [08a97449e70420f352437d2e7b662ce49460b732cd9f801bfcc38ab73978576f] <==
	I1017 19:41:39.141903       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 19:42:09.146139       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b1cf4808c3c06930bdd6510b062d88c619d479cded519a43398ca9bd108ed9a3] <==
	I1017 19:42:09.989113       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 19:42:09.998789       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 19:42:09.998857       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 19:42:10.004599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:13.460571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:17.720889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:21.319998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:24.374132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:27.397296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:27.402917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:42:27.403110       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 19:42:27.403309       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-599709_9be73794-60ef-46e8-b1d6-e500d86aaa0c!
	I1017 19:42:27.403609       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa1d4ee3-e6c8-4c5a-aa5e-ad86f7d4d22b", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-599709_9be73794-60ef-46e8-b1d6-e500d86aaa0c became leader
	W1017 19:42:27.406635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:27.412096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:42:27.503494       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-599709_9be73794-60ef-46e8-b1d6-e500d86aaa0c!
	W1017 19:42:29.417151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:29.423193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:31.427560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:31.434062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:33.437245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:33.441675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:35.445994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:35.451796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
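The log excerpts above show a typical restart signature: kube-scheduler transiently fails to list cluster-scoped resources while the bootstrap clusterroles are still being created, and the first storage-provisioner instance exits with an i/o timeout against 10.96.0.1:443 before its replacement acquires the leader lease. As an illustration only (not harness code; the kubeconfig source and retry budget are assumptions, not values from this run), a minimal client-go sketch of the same retry-until-the-apiserver-answers pattern:

	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location (~/.kube/config); substitute your own.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Retry the same kind of call storage-provisioner makes at startup
		// (a server-version probe) until the apiserver responds.
		for attempt := 1; attempt <= 10; attempt++ {
			v, err := cs.Discovery().ServerVersion()
			if err == nil {
				fmt.Println("apiserver is up, version:", v.GitVersion)
				return
			}
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(3 * time.Second)
		}
	}
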
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-599709 -n embed-certs-599709
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-599709 -n embed-certs-599709: exit status 2 (399.874976ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
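The harness tolerates exit status 2 here because minikube's status command encodes cluster component state in its exit code, so a non-zero exit does not necessarily mean the check itself failed. A minimal Go sketch of reading that exit code with os/exec (the binary path and profile name mirror this run; treat the snippet as illustrative, not harness code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the harness invocation; assumes the minikube binary is
		// reachable at this relative path, as in the test workspace.
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "embed-certs-599709", "-n", "embed-certs-599709")
		out, err := cmd.Output()
		fmt.Printf("status output: %s", out)

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A non-zero code (2 in this run) reports cluster state,
			// which is why the harness notes "(may be ok)".
			fmt.Println("exit code:", exitErr.ExitCode())
		}
	}
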
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-599709 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-599709
helpers_test.go:243: (dbg) docker inspect embed-certs-599709:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590",
	        "Created": "2025-10-17T19:40:26.431376563Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 741367,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:41:29.018758586Z",
	            "FinishedAt": "2025-10-17T19:41:27.760245729Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590/hostname",
	        "HostsPath": "/var/lib/docker/containers/65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590/hosts",
	        "LogPath": "/var/lib/docker/containers/65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590/65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590-json.log",
	        "Name": "/embed-certs-599709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-599709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-599709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "65267e6fd2cc43de60ecf8ea56b9d12897767c145eb0285daf0bf3755eb93590",
	                "LowerDir": "/var/lib/docker/overlay2/dd68b6694848505f3dbe0f1fc8175ad12403dd916c34260859d346ccfc6326c8-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dd68b6694848505f3dbe0f1fc8175ad12403dd916c34260859d346ccfc6326c8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dd68b6694848505f3dbe0f1fc8175ad12403dd916c34260859d346ccfc6326c8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dd68b6694848505f3dbe0f1fc8175ad12403dd916c34260859d346ccfc6326c8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-599709",
	                "Source": "/var/lib/docker/volumes/embed-certs-599709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-599709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-599709",
	                "name.minikube.sigs.k8s.io": "embed-certs-599709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d6634a3534ea8f49afc8c978745d865c99673b7e5e4804fe71f574c5917c31d3",
	            "SandboxKey": "/var/run/docker/netns/d6634a3534ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-599709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:3a:ef:83:50:11",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "116cc729b1af4d4ec359cb40c0efa07f90c3ee85e9adaa14764bb2ee64de2228",
	                    "EndpointID": "83df366d801b14a0c479c2214762c83d06bc26d0e49ee1473e6ae4f0c11f5c4d",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-599709",
	                        "65267e6fd2cc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-599709 -n embed-certs-599709
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-599709 -n embed-certs-599709: exit status 2 (359.207421ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-599709 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-599709 logs -n 25: (1.433567578s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-171807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ stop    │ -p no-preload-171807 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p no-preload-171807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-599709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ stop    │ -p embed-certs-599709 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p embed-certs-599709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:42 UTC │
	│ image   │ old-k8s-version-907112 image list --format=json                                                                                                                                                                                               │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-907112 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ delete  │ -p old-k8s-version-907112                                                                                                                                                                                                                     │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-907112                                                                                                                                                                                                                     │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ delete  │ -p disable-driver-mounts-220565                                                                                                                                                                                                               │ disable-driver-mounts-220565 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p default-k8s-diff-port-112878 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:42 UTC │
	│ image   │ no-preload-171807 image list --format=json                                                                                                                                                                                                    │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ pause   │ -p no-preload-171807 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ delete  │ -p no-preload-171807                                                                                                                                                                                                                          │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ delete  │ -p no-preload-171807                                                                                                                                                                                                                          │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-438547 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-137244    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-137244    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ image   │ embed-certs-599709 image list --format=json                                                                                                                                                                                                   │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ pause   │ -p embed-certs-599709 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-112878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-112878 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:42:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:42:32.284642  756339 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:42:32.284938  756339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:42:32.284948  756339 out.go:374] Setting ErrFile to fd 2...
	I1017 19:42:32.284952  756339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:42:32.285167  756339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:42:32.285627  756339 out.go:368] Setting JSON to false
	I1017 19:42:32.286955  756339 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12291,"bootTime":1760717861,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:42:32.287079  756339 start.go:141] virtualization: kvm guest
	I1017 19:42:32.288866  756339 out.go:179] * [kubernetes-upgrade-137244] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:42:32.290361  756339 notify.go:220] Checking for updates...
	I1017 19:42:32.290381  756339 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:42:32.291717  756339 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:42:32.293887  756339 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:42:32.295817  756339 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:42:32.297786  756339 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:42:32.299504  756339 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:42:32.302054  756339 config.go:182] Loaded profile config "kubernetes-upgrade-137244": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:42:32.302866  756339 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:42:32.330847  756339 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:42:32.330958  756339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:42:32.404999  756339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 19:42:32.393716398 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:42:32.405120  756339 docker.go:318] overlay module found
	I1017 19:42:32.407813  756339 out.go:179] * Using the docker driver based on existing profile
	I1017 19:42:32.409163  756339 start.go:305] selected driver: docker
	I1017 19:42:32.409186  756339 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-137244 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-137244 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:42:32.409310  756339 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:42:32.410021  756339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:42:32.478656  756339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 19:42:32.466730101 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:42:32.479162  756339 cni.go:84] Creating CNI manager for ""
	I1017 19:42:32.479244  756339 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:42:32.479327  756339 start.go:349] cluster config:
	{Name:kubernetes-upgrade-137244 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-137244 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:42:32.481696  756339 out.go:179] * Starting "kubernetes-upgrade-137244" primary control-plane node in "kubernetes-upgrade-137244" cluster
	I1017 19:42:32.483108  756339 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:42:32.484546  756339 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:42:32.485879  756339 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:42:32.485924  756339 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:42:32.485945  756339 cache.go:58] Caching tarball of preloaded images
	I1017 19:42:32.485996  756339 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:42:32.486050  756339 preload.go:233] Found /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:42:32.486066  756339 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:42:32.486222  756339 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/kubernetes-upgrade-137244/config.json ...
	I1017 19:42:32.511005  756339 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:42:32.511029  756339 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:42:32.511049  756339 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:42:32.511082  756339 start.go:360] acquireMachinesLock for kubernetes-upgrade-137244: {Name:mk295f0c37c369f712e9c8f3857f62f6297f3f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:42:32.511151  756339 start.go:364] duration metric: took 47.722µs to acquireMachinesLock for "kubernetes-upgrade-137244"
	I1017 19:42:32.511179  756339 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:42:32.511186  756339 fix.go:54] fixHost starting: 
	I1017 19:42:32.511477  756339 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-137244 --format={{.State.Status}}
	I1017 19:42:32.531731  756339 fix.go:112] recreateIfNeeded on kubernetes-upgrade-137244: state=Running err=<nil>
	W1017 19:42:32.531776  756339 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:42:30.943588  753072 out.go:252]   - Booting up control plane ...
	I1017 19:42:30.943749  753072 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 19:42:30.943877  753072 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 19:42:30.943979  753072 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 19:42:30.958565  753072 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 19:42:30.958772  753072 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 19:42:30.965606  753072 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 19:42:30.965883  753072 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 19:42:30.965992  753072 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 19:42:31.077801  753072 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 19:42:31.077992  753072 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 19:42:32.579046  753072 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501467006s
	I1017 19:42:32.582509  753072 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 19:42:32.582645  753072 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1017 19:42:32.582797  753072 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 19:42:32.582906  753072 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 19:42:32.533730  756339 out.go:252] * Updating the running docker "kubernetes-upgrade-137244" container ...
	I1017 19:42:32.533769  756339 machine.go:93] provisionDockerMachine start ...
	I1017 19:42:32.533867  756339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-137244
	I1017 19:42:32.553341  756339 main.go:141] libmachine: Using SSH client type: native
	I1017 19:42:32.553626  756339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1017 19:42:32.553645  756339 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:42:32.700579  756339 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-137244
	
	I1017 19:42:32.700619  756339 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-137244"
	I1017 19:42:32.700812  756339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-137244
	I1017 19:42:32.728436  756339 main.go:141] libmachine: Using SSH client type: native
	I1017 19:42:32.728862  756339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1017 19:42:32.728886  756339 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-137244 && echo "kubernetes-upgrade-137244" | sudo tee /etc/hostname
	I1017 19:42:32.905350  756339 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-137244
	
	I1017 19:42:32.905440  756339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-137244
	I1017 19:42:32.927675  756339 main.go:141] libmachine: Using SSH client type: native
	I1017 19:42:32.927976  756339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1017 19:42:32.927994  756339 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-137244' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-137244/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-137244' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:42:33.073890  756339 main.go:141] libmachine: SSH cmd err, output: <nil>: 
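The hosts-file script above is idempotent: it rewrites an existing 127.0.1.1 entry when one is present and appends one otherwise, so repeated provisioning passes do not stack duplicate lines. A minimal way to confirm the result from the host, assuming the container named in this log is still running:

        docker exec kubernetes-upgrade-137244 sh -c 'hostname && grep 127.0.1.1 /etc/hosts'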
	I1017 19:42:33.073925  756339 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-492109/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-492109/.minikube}
	I1017 19:42:33.073951  756339 ubuntu.go:190] setting up certificates
	I1017 19:42:33.073966  756339 provision.go:84] configureAuth start
	I1017 19:42:33.074033  756339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-137244
	I1017 19:42:33.098748  756339 provision.go:143] copyHostCerts
	I1017 19:42:33.098821  756339 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem, removing ...
	I1017 19:42:33.098844  756339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem
	I1017 19:42:33.098931  756339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem (1078 bytes)
	I1017 19:42:33.099067  756339 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem, removing ...
	I1017 19:42:33.099078  756339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem
	I1017 19:42:33.099115  756339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem (1123 bytes)
	I1017 19:42:33.099197  756339 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem, removing ...
	I1017 19:42:33.099207  756339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem
	I1017 19:42:33.099240  756339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem (1679 bytes)
	I1017 19:42:33.099303  756339 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-137244 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-137244 localhost minikube]
	I1017 19:42:33.878765  756339 provision.go:177] copyRemoteCerts
	I1017 19:42:33.878826  756339 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:42:33.878875  756339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-137244
	I1017 19:42:33.900078  756339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/kubernetes-upgrade-137244/id_rsa Username:docker}
	I1017 19:42:34.010366  756339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 19:42:34.037013  756339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1017 19:42:34.062650  756339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:42:34.089303  756339 provision.go:87] duration metric: took 1.015304837s to configureAuth
	I1017 19:42:34.089336  756339 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:42:34.089554  756339 config.go:182] Loaded profile config "kubernetes-upgrade-137244": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:42:34.089677  756339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-137244
	I1017 19:42:34.115168  756339 main.go:141] libmachine: Using SSH client type: native
	I1017 19:42:34.115512  756339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1017 19:42:34.115540  756339 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:42:34.773169  756339 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
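The step above writes an environment file that the CRI-O unit in the kicbase image is set up to read, marking the in-cluster service CIDR (10.96.0.0/12) as an insecure registry range before the daemon restart. A minimal verification sketch, assuming SSH access via the profile shown in this log:

        minikube -p kubernetes-upgrade-137244 ssh -- cat /etc/sysconfig/crio.minikube
        minikube -p kubernetes-upgrade-137244 ssh -- sudo systemctl show crio --property=EnvironmentFiles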
	
	I1017 19:42:34.773211  756339 machine.go:96] duration metric: took 2.239432404s to provisionDockerMachine
	I1017 19:42:34.773227  756339 start.go:293] postStartSetup for "kubernetes-upgrade-137244" (driver="docker")
	I1017 19:42:34.773241  756339 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:42:34.773305  756339 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:42:34.773357  756339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-137244
	I1017 19:42:34.821245  756339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/kubernetes-upgrade-137244/id_rsa Username:docker}
	I1017 19:42:34.951415  756339 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:42:34.958690  756339 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:42:34.958747  756339 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:42:34.958762  756339 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/addons for local assets ...
	I1017 19:42:34.958826  756339 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/files for local assets ...
	I1017 19:42:34.958949  756339 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem -> 4957252.pem in /etc/ssl/certs
	I1017 19:42:34.959082  756339 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:42:34.978749  756339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:42:35.006289  756339 start.go:296] duration metric: took 233.042197ms for postStartSetup
	I1017 19:42:35.006402  756339 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:42:35.006463  756339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-137244
	I1017 19:42:35.027893  756339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/kubernetes-upgrade-137244/id_rsa Username:docker}
	I1017 19:42:35.133779  756339 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:42:35.141204  756339 fix.go:56] duration metric: took 2.629995844s for fixHost
	I1017 19:42:35.141242  756339 start.go:83] releasing machines lock for "kubernetes-upgrade-137244", held for 2.630071038s
	I1017 19:42:35.141331  756339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-137244
	I1017 19:42:35.177072  756339 ssh_runner.go:195] Run: cat /version.json
	I1017 19:42:35.177135  756339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-137244
	I1017 19:42:35.177350  756339 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:42:35.177425  756339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-137244
	I1017 19:42:35.203845  756339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/kubernetes-upgrade-137244/id_rsa Username:docker}
	I1017 19:42:35.205299  756339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/kubernetes-upgrade-137244/id_rsa Username:docker}
	I1017 19:42:35.334197  756339 ssh_runner.go:195] Run: systemctl --version
	I1017 19:42:35.692277  756339 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:42:35.742038  756339 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:42:35.747977  756339 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:42:35.748047  756339 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:42:35.757817  756339 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:42:35.757843  756339 start.go:495] detecting cgroup driver to use...
	I1017 19:42:35.757882  756339 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 19:42:35.757937  756339 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:42:35.776708  756339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:42:35.792783  756339 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:42:35.792848  756339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:42:35.811919  756339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:42:35.829817  756339 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:42:35.959934  756339 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:42:36.112252  756339 docker.go:234] disabling docker service ...
	I1017 19:42:36.112324  756339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:42:36.132034  756339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:42:36.150898  756339 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:42:36.300502  756339 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:42:36.463357  756339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:42:36.481587  756339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:42:36.503793  756339 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:42:36.503865  756339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:42:36.518240  756339 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 19:42:36.518328  756339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:42:36.534240  756339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:42:36.549989  756339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:42:36.564796  756339 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:42:36.576378  756339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:42:36.589227  756339 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:42:36.601937  756339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
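Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (reconstructed from the commands; section headers omitted, and the resulting file itself is not captured in this log):

        pause_image = "registry.k8s.io/pause:3.10.1"
        cgroup_manager = "systemd"
        conmon_cgroup = "pod"
        default_sysctls = [
          "net.ipv4.ip_unprivileged_port_start=0",
        ]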
	I1017 19:42:36.617000  756339 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:42:36.634630  756339 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
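The echo above enables IPv4 forwarding for the running kernel only, which is all a throwaway kic container needs. For comparison, the persistent equivalent on a long-lived host would be (illustrative; not something this log does):

        echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
        sudo sysctl --system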
	I1017 19:42:36.651756  756339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:42:36.802152  756339 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:42:36.964748  756339 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:42:36.964838  756339 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:42:36.969286  756339 start.go:563] Will wait 60s for crictl version
	I1017 19:42:36.969361  756339 ssh_runner.go:195] Run: which crictl
	I1017 19:42:36.974463  756339 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:42:37.002431  756339 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:42:37.002543  756339 ssh_runner.go:195] Run: crio --version
	I1017 19:42:37.037375  756339 ssh_runner.go:195] Run: crio --version
	I1017 19:42:37.083272  756339 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:42:34.995039  753072 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.412359946s
	I1017 19:42:35.702376  753072 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.119769675s
	I1017 19:42:37.088578  753072 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.503748905s
	I1017 19:42:37.101160  753072 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 19:42:37.117514  753072 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 19:42:37.137128  753072 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 19:42:37.137428  753072 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-438547 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 19:42:37.154365  753072 kubeadm.go:318] [bootstrap-token] Using token: 34r4tm.g0c4t44bh9jsukyf
	I1017 19:42:37.085090  756339 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-137244 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:42:37.108768  756339 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 19:42:37.114560  756339 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-137244 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-137244 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:42:37.114737  756339 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:42:37.114818  756339 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:42:37.168342  756339 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:42:37.168377  756339 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:42:37.168430  756339 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:42:37.204438  756339 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:42:37.204463  756339 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:42:37.204470  756339 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1017 19:42:37.204584  756339 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-137244 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-137244 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:42:37.204669  756339 ssh_runner.go:195] Run: crio config
	I1017 19:42:37.265142  756339 cni.go:84] Creating CNI manager for ""
	I1017 19:42:37.265176  756339 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:42:37.265205  756339 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:42:37.265238  756339 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-137244 NodeName:kubernetes-upgrade-137244 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:42:37.265374  756339 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-137244"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
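The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are the input minikube feeds to `kubeadm init --config`, using the binaries directory checked on the next log line. A hedged sketch for vetting such a config offline, assuming it were first saved to a file (the path below is illustrative; this log does not show where minikube writes it):

        sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml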
	
	I1017 19:42:37.265455  756339 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:42:37.274577  756339 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:42:37.274647  756339 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:42:37.283642  756339 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
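The 375-byte file scp'd here is the kubelet drop-in printed earlier, with its doubled ExecStart. The empty `ExecStart=` line is the standard systemd idiom for replacing, rather than appending to, a unit's packaged start command; the same pattern works for any non-oneshot service:

        # generic override fragment (illustrative, not taken from this log)
        [Service]
        ExecStart=
        ExecStart=/usr/local/bin/mydaemon --my-flags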
	
	
	==> CRI-O <==
	Oct 17 19:41:51 embed-certs-599709 crio[563]: time="2025-10-17T19:41:51.916509303Z" level=info msg="Started container" PID=1748 containerID=5b68e2e60da42953705cbec93f7d42eff4c4084f455b0009cb96a35e50ff851e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n/dashboard-metrics-scraper id=d5e1214a-d1fa-45c2-8e3f-13cd95addf6b name=/runtime.v1.RuntimeService/StartContainer sandboxID=0079e08b246389f561f22d2cd672b23163f6f935a5f77f44b2c7f93667c2e8ac
	Oct 17 19:41:52 embed-certs-599709 crio[563]: time="2025-10-17T19:41:52.862346997Z" level=info msg="Removing container: 6e732383138061518d6fab80051b5f2939e6d4d8e32b105d147db2f432edbe2e" id=99ceb06b-4afd-4a31-a4f1-1c842a453a28 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:41:52 embed-certs-599709 crio[563]: time="2025-10-17T19:41:52.874390887Z" level=info msg="Removed container 6e732383138061518d6fab80051b5f2939e6d4d8e32b105d147db2f432edbe2e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n/dashboard-metrics-scraper" id=99ceb06b-4afd-4a31-a4f1-1c842a453a28 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.772920813Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b483c6c0-be40-4d3d-b083-10db7ecb0303 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.774014755Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ae644ebe-088a-401d-8ade-f449860049fa name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.775127377Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n/dashboard-metrics-scraper" id=9b845a05-a524-4ba8-a5db-55a85d500c62 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.775433254Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.781786311Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.78250666Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.81894298Z" level=info msg="Created container 68f94505306836a4afa09de153f7145228fbb668039a0e3b489f7fe6b12a5b07: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n/dashboard-metrics-scraper" id=9b845a05-a524-4ba8-a5db-55a85d500c62 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.819767664Z" level=info msg="Starting container: 68f94505306836a4afa09de153f7145228fbb668039a0e3b489f7fe6b12a5b07" id=be9157be-f18d-444a-ba43-bdb3c8ebab8a name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.821887938Z" level=info msg="Started container" PID=1758 containerID=68f94505306836a4afa09de153f7145228fbb668039a0e3b489f7fe6b12a5b07 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n/dashboard-metrics-scraper id=be9157be-f18d-444a-ba43-bdb3c8ebab8a name=/runtime.v1.RuntimeService/StartContainer sandboxID=0079e08b246389f561f22d2cd672b23163f6f935a5f77f44b2c7f93667c2e8ac
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.902407826Z" level=info msg="Removing container: 5b68e2e60da42953705cbec93f7d42eff4c4084f455b0009cb96a35e50ff851e" id=6b19a09f-d757-4106-a92f-b48c16b00adc name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:42:05 embed-certs-599709 crio[563]: time="2025-10-17T19:42:05.914511848Z" level=info msg="Removed container 5b68e2e60da42953705cbec93f7d42eff4c4084f455b0009cb96a35e50ff851e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n/dashboard-metrics-scraper" id=6b19a09f-d757-4106-a92f-b48c16b00adc name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.916577311Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=61354ab9-b58d-4104-b0f2-15b3a9bd543d name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.917799779Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b56311bd-54ab-422a-b4c9-e7406c29abdb name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.921302409Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=fdca4123-9f1f-430d-b7c9-be08f3cbcd2c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.921621575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.927156633Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.927616408Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c917dfce4e5c09893cfbc1ce785adb2cc67c17c7fdb6a6c68fb0e8b5ca3f079c/merged/etc/passwd: no such file or directory"
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.927765503Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c917dfce4e5c09893cfbc1ce785adb2cc67c17c7fdb6a6c68fb0e8b5ca3f079c/merged/etc/group: no such file or directory"
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.928218436Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.968327073Z" level=info msg="Created container b1cf4808c3c06930bdd6510b062d88c619d479cded519a43398ca9bd108ed9a3: kube-system/storage-provisioner/storage-provisioner" id=fdca4123-9f1f-430d-b7c9-be08f3cbcd2c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.969315416Z" level=info msg="Starting container: b1cf4808c3c06930bdd6510b062d88c619d479cded519a43398ca9bd108ed9a3" id=3a1a0276-ea3a-4ae1-aae9-04dedefd9237 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:42:09 embed-certs-599709 crio[563]: time="2025-10-17T19:42:09.972256598Z" level=info msg="Started container" PID=1772 containerID=b1cf4808c3c06930bdd6510b062d88c619d479cded519a43398ca9bd108ed9a3 description=kube-system/storage-provisioner/storage-provisioner id=3a1a0276-ea3a-4ae1-aae9-04dedefd9237 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5a222b88f349ee6aad4aac986c2daf779da4e56f90f7ffa212bdeb168972754
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b1cf4808c3c06       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           28 seconds ago       Running             storage-provisioner         1                   b5a222b88f349       storage-provisioner                          kube-system
	68f9450530683       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago       Exited              dashboard-metrics-scraper   2                   0079e08b24638       dashboard-metrics-scraper-6ffb444bf9-xw42n   kubernetes-dashboard
	41a78ce660ab9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   52 seconds ago       Running             kubernetes-dashboard        0                   02cefe766e6af       kubernetes-dashboard-855c9754f9-mh7df        kubernetes-dashboard
	c71792b79c961       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           59 seconds ago       Running             coredns                     0                   d57c56ca50297       coredns-66bc5c9577-v8hls                     kube-system
	724b28842e066       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           59 seconds ago       Running             busybox                     1                   96da245b5f256       busybox                                      default
	08a97449e7042       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           59 seconds ago       Exited              storage-provisioner         0                   b5a222b88f349       storage-provisioner                          kube-system
	86d44709917ef       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           59 seconds ago       Running             kube-proxy                  0                   928fdd095ceaf       kube-proxy-l2pwz                             kube-system
	0441540fc07f4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           59 seconds ago       Running             kindnet-cni                 0                   17fd20279adeb       kindnet-sj7sj                                kube-system
	9229cd3e223ec       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   09ba589c11933       kube-controller-manager-embed-certs-599709   kube-system
	eeadd287c3bf7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   b874ce0264ac9       kube-apiserver-embed-certs-599709            kube-system
	3320bb4791740       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   47558cae563fa       kube-scheduler-embed-certs-599709            kube-system
	eccf39ad86610       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   8424a0aef5e0d       etcd-embed-certs-599709                      kube-system
	
	
	==> coredns [c71792b79c961b845a6b99c5bcccfc46f9de23c2206aa747b29a31c68e849961] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49433 - 18909 "HINFO IN 4959494782317338407.7900292932436731498. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.083606372s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
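The i/o timeouts above are CoreDNS failing to reach the apiserver through the kubernetes service VIP (10.96.0.1) while the control plane was coming back; the watch recovers once the API answers, which matches the earlier "waiting for Kubernetes API" lines. If such errors persisted, a hedged first check (assuming a working kubeconfig for this cluster) would be that the VIP still maps to the apiserver endpoint:

        kubectl get svc kubernetes -n default
        kubectl get endpoints kubernetes -n default   # should list the apiserver address, 192.168.94.2:8443 for this node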
	
	
	==> describe nodes <==
	Name:               embed-certs-599709
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-599709
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=embed-certs-599709
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_40_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:40:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-599709
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:42:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:42:09 +0000   Fri, 17 Oct 2025 19:40:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:42:09 +0000   Fri, 17 Oct 2025 19:40:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:42:09 +0000   Fri, 17 Oct 2025 19:40:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:42:09 +0000   Fri, 17 Oct 2025 19:40:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-599709
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                4ab96baf-e93c-4e34-b927-fdc987244361
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-v8hls                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-embed-certs-599709                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-sj7sj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-embed-certs-599709             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-embed-certs-599709    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-l2pwz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-embed-certs-599709             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-xw42n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mh7df         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  117s               kubelet          Node embed-certs-599709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s               kubelet          Node embed-certs-599709 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s               kubelet          Node embed-certs-599709 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node embed-certs-599709 event: Registered Node embed-certs-599709 in Controller
	  Normal  NodeReady                100s               kubelet          Node embed-certs-599709 status is now: NodeReady
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)  kubelet          Node embed-certs-599709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet          Node embed-certs-599709 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)  kubelet          Node embed-certs-599709 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                node-controller  Node embed-certs-599709 event: Registered Node embed-certs-599709 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d1 49 91 03 c2 08 06
	[  +0.000804] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 a9 2b 44 da ae 08 06
	[Oct17 18:59] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022229] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023876] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	
	
	==> etcd [eccf39ad86610aefaf8eaf41939eb4ad09f3ebbd9c6afbe871000f0047c47987] <==
	{"level":"warn","ts":"2025-10-17T19:41:49.595360Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.751538ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-v8hls\" limit:1 ","response":"range_response_count:1 size:5934"}
	{"level":"info","ts":"2025-10-17T19:41:49.595395Z","caller":"traceutil/trace.go:172","msg":"trace[1816212144] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-v8hls; range_end:; response_count:1; response_revision:576; }","duration":"124.804913ms","start":"2025-10-17T19:41:49.470579Z","end":"2025-10-17T19:41:49.595384Z","steps":["trace[1816212144] 'agreement among raft nodes before linearized reading'  (duration: 124.63996ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:41:49.595442Z","caller":"traceutil/trace.go:172","msg":"trace[496287099] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"207.078087ms","start":"2025-10-17T19:41:49.388341Z","end":"2025-10-17T19:41:49.595419Z","steps":["trace[496287099] 'process raft request'  (duration: 206.86249ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:41:49.717418Z","caller":"traceutil/trace.go:172","msg":"trace[1805226362] linearizableReadLoop","detail":"{readStateIndex:605; appliedIndex:605; }","duration":"118.586444ms","start":"2025-10-17T19:41:49.598804Z","end":"2025-10-17T19:41:49.717391Z","steps":["trace[1805226362] 'read index received'  (duration: 118.578335ms)","trace[1805226362] 'applied index is now lower than readState.Index'  (duration: 6.879µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T19:41:49.920164Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"321.332724ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-599709\" limit:1 ","response":"range_response_count:1 size:5685"}
	{"level":"warn","ts":"2025-10-17T19:41:49.920179Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"321.357598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-599709\" limit:1 ","response":"range_response_count:1 size:8201"}
	{"level":"info","ts":"2025-10-17T19:41:49.920223Z","caller":"traceutil/trace.go:172","msg":"trace[249947990] range","detail":"{range_begin:/registry/minions/embed-certs-599709; range_end:; response_count:1; response_revision:577; }","duration":"321.409167ms","start":"2025-10-17T19:41:49.598802Z","end":"2025-10-17T19:41:49.920211Z","steps":["trace[249947990] 'agreement among raft nodes before linearized reading'  (duration: 118.640276ms)","trace[249947990] 'range keys from in-memory index tree'  (duration: 202.586626ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T19:41:49.920222Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.744105ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765553191643303 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:5b3399f3b122e8a6>","response":"size:41"}
	{"level":"info","ts":"2025-10-17T19:41:49.920247Z","caller":"traceutil/trace.go:172","msg":"trace[652169401] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-embed-certs-599709; range_end:; response_count:1; response_revision:577; }","duration":"321.425216ms","start":"2025-10-17T19:41:49.598795Z","end":"2025-10-17T19:41:49.920220Z","steps":["trace[652169401] 'agreement among raft nodes before linearized reading'  (duration: 118.713128ms)","trace[652169401] 'range keys from in-memory index tree'  (duration: 202.552724ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T19:41:49.920277Z","caller":"traceutil/trace.go:172","msg":"trace[2103183253] linearizableReadLoop","detail":"{readStateIndex:606; appliedIndex:605; }","duration":"202.787348ms","start":"2025-10-17T19:41:49.717482Z","end":"2025-10-17T19:41:49.920269Z","steps":["trace[2103183253] 'read index received'  (duration: 37.298µs)","trace[2103183253] 'applied index is now lower than readState.Index'  (duration: 202.749374ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T19:41:49.920257Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-17T19:41:49.598784Z","time spent":"321.464586ms","remote":"127.0.0.1:54464","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":5709,"request content":"key:\"/registry/minions/embed-certs-599709\" limit:1 "}
	{"level":"warn","ts":"2025-10-17T19:41:49.920307Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-17T19:41:49.598779Z","time spent":"321.491697ms","remote":"127.0.0.1:54492","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":8225,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-599709\" limit:1 "}
	{"level":"warn","ts":"2025-10-17T19:41:49.920321Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"318.35049ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T19:41:49.920342Z","caller":"traceutil/trace.go:172","msg":"trace[1576917677] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:577; }","duration":"318.373756ms","start":"2025-10-17T19:41:49.601964Z","end":"2025-10-17T19:41:49.920338Z","steps":["trace[1576917677] 'agreement among raft nodes before linearized reading'  (duration: 318.331253ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:41:49.920355Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-17T19:41:49.601949Z","time spent":"318.403237ms","remote":"127.0.0.1:54176","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-10-17T19:41:49.920302Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-17T19:41:49.596299Z","time spent":"324.000197ms","remote":"127.0.0.1:54230","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2025-10-17T19:41:50.046471Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.79366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2025-10-17T19:41:50.046529Z","caller":"traceutil/trace.go:172","msg":"trace[379849208] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:578; }","duration":"100.870602ms","start":"2025-10-17T19:41:49.945647Z","end":"2025-10-17T19:41:50.046518Z","steps":["trace[379849208] 'agreement among raft nodes before linearized reading'  (duration: 88.495738ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:41:50.046567Z","caller":"traceutil/trace.go:172","msg":"trace[1264162776] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"116.945751ms","start":"2025-10-17T19:41:49.929596Z","end":"2025-10-17T19:41:50.046542Z","steps":["trace[1264162776] 'process raft request'  (duration: 104.621672ms)","trace[1264162776] 'compare'  (duration: 12.14726ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T19:41:50.271672Z","caller":"traceutil/trace.go:172","msg":"trace[1439312170] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"171.199748ms","start":"2025-10-17T19:41:50.100450Z","end":"2025-10-17T19:41:50.271649Z","steps":["trace[1439312170] 'process raft request'  (duration: 145.884315ms)","trace[1439312170] 'compare'  (duration: 25.197994ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T19:42:19.010527Z","caller":"traceutil/trace.go:172","msg":"trace[1578569325] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"137.301399ms","start":"2025-10-17T19:42:18.873200Z","end":"2025-10-17T19:42:19.010502Z","steps":["trace[1578569325] 'process raft request'  (duration: 137.179644ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:19.208847Z","caller":"traceutil/trace.go:172","msg":"trace[1045870536] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"193.575977ms","start":"2025-10-17T19:42:19.015245Z","end":"2025-10-17T19:42:19.208821Z","steps":["trace[1045870536] 'process raft request'  (duration: 120.896375ms)","trace[1045870536] 'compare'  (duration: 72.565122ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T19:42:19.209415Z","caller":"traceutil/trace.go:172","msg":"trace[1702464014] transaction","detail":"{read_only:false; response_revision:623; number_of_response:1; }","duration":"194.111791ms","start":"2025-10-17T19:42:19.015285Z","end":"2025-10-17T19:42:19.209397Z","steps":["trace[1702464014] 'process raft request'  (duration: 193.976769ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:19.209590Z","caller":"traceutil/trace.go:172","msg":"trace[499014038] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"192.099308ms","start":"2025-10-17T19:42:19.017478Z","end":"2025-10-17T19:42:19.209577Z","steps":["trace[499014038] 'process raft request'  (duration: 191.880626ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:19.463016Z","caller":"traceutil/trace.go:172","msg":"trace[828346930] transaction","detail":"{read_only:false; response_revision:627; number_of_response:1; }","duration":"101.399638ms","start":"2025-10-17T19:42:19.361599Z","end":"2025-10-17T19:42:19.462999Z","steps":["trace[828346930] 'process raft request'  (duration: 64.432521ms)","trace[828346930] 'compare'  (duration: 36.860103ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:42:38 up  3:24,  0 user,  load average: 3.90, 3.40, 2.19
	Linux embed-certs-599709 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0441540fc07f4acf1b274f1e141b3f57fb47af829295b17c4515e4481635ddec] <==
	I1017 19:41:39.305331       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:41:39.305665       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1017 19:41:39.305876       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:41:39.305895       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:41:39.305915       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:41:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:41:39.603569       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:41:39.603615       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:41:39.603630       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:41:39.603823       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:41:40.104152       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:41:40.104195       1 metrics.go:72] Registering metrics
	I1017 19:41:40.104266       1 controller.go:711] "Syncing nftables rules"
	I1017 19:41:49.603955       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 19:41:49.603998       1 main.go:301] handling current node
	I1017 19:41:59.606815       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 19:41:59.606857       1 main.go:301] handling current node
	I1017 19:42:09.603855       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 19:42:09.603888       1 main.go:301] handling current node
	I1017 19:42:19.603586       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 19:42:19.603620       1 main.go:301] handling current node
	I1017 19:42:29.611770       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 19:42:29.611808       1 main.go:301] handling current node
	
	
	==> kube-apiserver [eeadd287c3bf74a34717467fb1adfa03126b04b4a20a9dd1ecd6ef8e5fa4c43a] <==
	I1017 19:41:38.384891       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 19:41:38.385067       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 19:41:38.384750       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1017 19:41:38.385268       1 aggregator.go:171] initial CRD sync complete...
	I1017 19:41:38.385278       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 19:41:38.385286       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 19:41:38.385292       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:41:38.385479       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:41:38.385521       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:41:38.385534       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 19:41:38.385522       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 19:41:38.393311       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 19:41:38.408733       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:41:38.426107       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:41:38.680986       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 19:41:38.712353       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 19:41:38.732837       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:41:38.741456       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:41:38.753963       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:41:38.800603       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.218.14"}
	I1017 19:41:38.814418       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.64.45"}
	I1017 19:41:39.286991       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:41:42.114327       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:41:42.163995       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:41:42.264708       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [9229cd3e223ec817b5885265f0c88a1b78735a34ba5f6a4b4723d3fee1cf4d34] <==
	I1017 19:41:41.710355       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 19:41:41.710291       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 19:41:41.710308       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:41:41.710484       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 19:41:41.710567       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 19:41:41.710573       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 19:41:41.710589       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 19:41:41.710724       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 19:41:41.711757       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 19:41:41.713735       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 19:41:41.714929       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:41:41.730571       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 19:41:41.730572       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:41:41.730647       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 19:41:41.730717       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 19:41:41.730730       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 19:41:41.730739       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 19:41:41.733809       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 19:41:41.736088       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:41:41.738439       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 19:41:41.738564       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 19:41:41.738712       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-599709"
	I1017 19:41:41.738819       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 19:41:41.744946       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:41:41.747207       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [86d44709917efa4ea7da20b25e01fb82cc2d82f4b052c55a4b04e72bf8d2ac0d] <==
	I1017 19:41:39.174092       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:41:39.228868       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:41:39.329101       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:41:39.329143       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1017 19:41:39.329257       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:41:39.352672       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:41:39.352773       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:41:39.359582       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:41:39.360093       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:41:39.360127       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:41:39.362272       1 config.go:309] "Starting node config controller"
	I1017 19:41:39.362307       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:41:39.362317       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:41:39.362272       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:41:39.362325       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:41:39.362697       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:41:39.362714       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:41:39.362864       1 config.go:200] "Starting service config controller"
	I1017 19:41:39.362941       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:41:39.462769       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:41:39.463866       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:41:39.463946       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [3320bb4791740d09b759229a773dc3c8b5f46f29bca00968f79441653fafafce] <==
	I1017 19:41:38.330656       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:41:38.335597       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:41:38.335727       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1017 19:41:38.339194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1017 19:41:38.339351       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 19:41:38.340092       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1017 19:41:38.351084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:41:38.351966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:41:38.352069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:41:38.352347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:41:38.352470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:41:38.352774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:41:38.352902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:41:38.353015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:41:38.353198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:41:38.353316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:41:38.353442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:41:38.353581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:41:38.353679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:41:38.353870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:41:38.353983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:41:38.354311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:41:38.354389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:41:38.354457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1017 19:41:39.636278       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:41:42 embed-certs-599709 kubelet[721]: I1017 19:41:42.438988     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/57cea4eb-4449-4f85-a911-073e40686fda-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-xw42n\" (UID: \"57cea4eb-4449-4f85-a911-073e40686fda\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n"
	Oct 17 19:41:42 embed-certs-599709 kubelet[721]: I1017 19:41:42.439027     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k74pq\" (UniqueName: \"kubernetes.io/projected/548ef298-e15a-4b09-831b-288b15fb3a90-kube-api-access-k74pq\") pod \"kubernetes-dashboard-855c9754f9-mh7df\" (UID: \"548ef298-e15a-4b09-831b-288b15fb3a90\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mh7df"
	Oct 17 19:41:48 embed-certs-599709 kubelet[721]: I1017 19:41:48.857078     721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 17 19:41:49 embed-certs-599709 kubelet[721]: I1017 19:41:49.597034     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mh7df" podStartSLOduration=4.057737185 podStartE2EDuration="7.597011826s" podCreationTimestamp="2025-10-17 19:41:42 +0000 UTC" firstStartedPulling="2025-10-17 19:41:42.673439565 +0000 UTC m=+7.009794799" lastFinishedPulling="2025-10-17 19:41:46.212714191 +0000 UTC m=+10.549069440" observedRunningTime="2025-10-17 19:41:46.856337186 +0000 UTC m=+11.192692442" watchObservedRunningTime="2025-10-17 19:41:49.597011826 +0000 UTC m=+13.933367082"
	Oct 17 19:41:51 embed-certs-599709 kubelet[721]: I1017 19:41:51.855648     721 scope.go:117] "RemoveContainer" containerID="6e732383138061518d6fab80051b5f2939e6d4d8e32b105d147db2f432edbe2e"
	Oct 17 19:41:52 embed-certs-599709 kubelet[721]: I1017 19:41:52.860845     721 scope.go:117] "RemoveContainer" containerID="6e732383138061518d6fab80051b5f2939e6d4d8e32b105d147db2f432edbe2e"
	Oct 17 19:41:52 embed-certs-599709 kubelet[721]: I1017 19:41:52.860974     721 scope.go:117] "RemoveContainer" containerID="5b68e2e60da42953705cbec93f7d42eff4c4084f455b0009cb96a35e50ff851e"
	Oct 17 19:41:52 embed-certs-599709 kubelet[721]: E1017 19:41:52.861168     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xw42n_kubernetes-dashboard(57cea4eb-4449-4f85-a911-073e40686fda)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n" podUID="57cea4eb-4449-4f85-a911-073e40686fda"
	Oct 17 19:41:53 embed-certs-599709 kubelet[721]: I1017 19:41:53.866497     721 scope.go:117] "RemoveContainer" containerID="5b68e2e60da42953705cbec93f7d42eff4c4084f455b0009cb96a35e50ff851e"
	Oct 17 19:41:53 embed-certs-599709 kubelet[721]: E1017 19:41:53.866752     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xw42n_kubernetes-dashboard(57cea4eb-4449-4f85-a911-073e40686fda)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n" podUID="57cea4eb-4449-4f85-a911-073e40686fda"
	Oct 17 19:41:54 embed-certs-599709 kubelet[721]: I1017 19:41:54.869362     721 scope.go:117] "RemoveContainer" containerID="5b68e2e60da42953705cbec93f7d42eff4c4084f455b0009cb96a35e50ff851e"
	Oct 17 19:41:54 embed-certs-599709 kubelet[721]: E1017 19:41:54.869556     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xw42n_kubernetes-dashboard(57cea4eb-4449-4f85-a911-073e40686fda)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n" podUID="57cea4eb-4449-4f85-a911-073e40686fda"
	Oct 17 19:42:05 embed-certs-599709 kubelet[721]: I1017 19:42:05.772458     721 scope.go:117] "RemoveContainer" containerID="5b68e2e60da42953705cbec93f7d42eff4c4084f455b0009cb96a35e50ff851e"
	Oct 17 19:42:05 embed-certs-599709 kubelet[721]: I1017 19:42:05.900943     721 scope.go:117] "RemoveContainer" containerID="5b68e2e60da42953705cbec93f7d42eff4c4084f455b0009cb96a35e50ff851e"
	Oct 17 19:42:05 embed-certs-599709 kubelet[721]: I1017 19:42:05.901167     721 scope.go:117] "RemoveContainer" containerID="68f94505306836a4afa09de153f7145228fbb668039a0e3b489f7fe6b12a5b07"
	Oct 17 19:42:05 embed-certs-599709 kubelet[721]: E1017 19:42:05.901396     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xw42n_kubernetes-dashboard(57cea4eb-4449-4f85-a911-073e40686fda)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n" podUID="57cea4eb-4449-4f85-a911-073e40686fda"
	Oct 17 19:42:09 embed-certs-599709 kubelet[721]: I1017 19:42:09.915309     721 scope.go:117] "RemoveContainer" containerID="08a97449e70420f352437d2e7b662ce49460b732cd9f801bfcc38ab73978576f"
	Oct 17 19:42:13 embed-certs-599709 kubelet[721]: I1017 19:42:13.744501     721 scope.go:117] "RemoveContainer" containerID="68f94505306836a4afa09de153f7145228fbb668039a0e3b489f7fe6b12a5b07"
	Oct 17 19:42:13 embed-certs-599709 kubelet[721]: E1017 19:42:13.744747     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xw42n_kubernetes-dashboard(57cea4eb-4449-4f85-a911-073e40686fda)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n" podUID="57cea4eb-4449-4f85-a911-073e40686fda"
	Oct 17 19:42:25 embed-certs-599709 kubelet[721]: I1017 19:42:25.772427     721 scope.go:117] "RemoveContainer" containerID="68f94505306836a4afa09de153f7145228fbb668039a0e3b489f7fe6b12a5b07"
	Oct 17 19:42:25 embed-certs-599709 kubelet[721]: E1017 19:42:25.772697     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xw42n_kubernetes-dashboard(57cea4eb-4449-4f85-a911-073e40686fda)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xw42n" podUID="57cea4eb-4449-4f85-a911-073e40686fda"
	Oct 17 19:42:33 embed-certs-599709 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 19:42:33 embed-certs-599709 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 19:42:33 embed-certs-599709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 19:42:33 embed-certs-599709 systemd[1]: kubelet.service: Consumed 1.904s CPU time.
	
	
	==> kubernetes-dashboard [41a78ce660ab9948eef365dc04305d4032e755474c32025ba6c3e26c56f866ca] <==
	2025/10/17 19:41:46 Using namespace: kubernetes-dashboard
	2025/10/17 19:41:46 Using in-cluster config to connect to apiserver
	2025/10/17 19:41:46 Using secret token for csrf signing
	2025/10/17 19:41:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 19:41:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 19:41:46 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 19:41:46 Generating JWE encryption key
	2025/10/17 19:41:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 19:41:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 19:41:46 Initializing JWE encryption key from synchronized object
	2025/10/17 19:41:46 Creating in-cluster Sidecar client
	2025/10/17 19:41:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 19:41:46 Serving insecurely on HTTP port: 9090
	2025/10/17 19:42:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 19:41:46 Starting overwatch
	
	
	==> storage-provisioner [08a97449e70420f352437d2e7b662ce49460b732cd9f801bfcc38ab73978576f] <==
	I1017 19:41:39.141903       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 19:42:09.146139       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b1cf4808c3c06930bdd6510b062d88c619d479cded519a43398ca9bd108ed9a3] <==
	I1017 19:42:09.998789       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 19:42:09.998857       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 19:42:10.004599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:13.460571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:17.720889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:21.319998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:24.374132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:27.397296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:27.402917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:42:27.403110       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 19:42:27.403309       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-599709_9be73794-60ef-46e8-b1d6-e500d86aaa0c!
	I1017 19:42:27.403609       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa1d4ee3-e6c8-4c5a-aa5e-ad86f7d4d22b", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-599709_9be73794-60ef-46e8-b1d6-e500d86aaa0c became leader
	W1017 19:42:27.406635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:27.412096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:42:27.503494       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-599709_9be73794-60ef-46e8-b1d6-e500d86aaa0c!
	W1017 19:42:29.417151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:29.423193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:31.427560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:31.434062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:33.437245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:33.441675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:35.445994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:35.451796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:37.455461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:37.461145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
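Taken together, the post-mortem above reads more like CI host contention than a cluster fault: etcd repeatedly warns "apply request took too long", with reads budgeted at 100ms taking 200-320ms, the kernel section shows a load average of 3.90, and systemd stopped kubelet.service at 19:42:33, which is at least consistent with the pause attempt reaching the node. To gauge how pervasive the etcd slowness is in a saved copy of these logs (logs.txt is a hypothetical file name, e.g. produced by `minikube logs --file=logs.txt` as suggested in the error box below):

	grep -c 'apply request took too long' logs.txt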
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-599709 -n embed-certs-599709
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-599709 -n embed-certs-599709: exit status 2 (351.877924ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-599709 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.86s)
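This Pause failure matches the other */serial/Pause failures in this run: the stderr from the EnableAddonWhileActive test below shows minikube deciding pause state by shelling out to `sudo runc list -f json`, which fails with "open /run/runc: no such file or directory" on these crio nodes. Re-running the pause by hand with verbose logging should surface the same failing step (a sketch, reusing the binary and profile name from the commands above):

	out/minikube-linux-amd64 pause -p embed-certs-599709 --alsologtostderr -v=1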

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-112878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-112878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (290.259858ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:42:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-112878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-112878 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-112878 describe deploy/metrics-server -n kube-system: exit status 1 (89.241464ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-112878 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
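The likely root cause is in the stderr above rather than in the addon itself: before enabling an addon, minikube appears to check whether the cluster is paused, and that check ("list paused: runc: sudo runc list -f json") fails because /run/runc does not exist inside the node, so the metrics-server deployment is never created and the kubectl describe correctly returns NotFound. If the node container is still running, the failing check can be reproduced directly (a sketch assuming the docker driver and this profile's container name, visible in the docker inspect output below):

	docker exec default-k8s-diff-port-112878 sudo runc list -f json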
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-112878
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-112878:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b",
	        "Created": "2025-10-17T19:41:51.17407631Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 747321,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:41:51.224955246Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b/hostname",
	        "HostsPath": "/var/lib/docker/containers/8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b/hosts",
	        "LogPath": "/var/lib/docker/containers/8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b/8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b-json.log",
	        "Name": "/default-k8s-diff-port-112878",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-112878:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-112878",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b",
	                "LowerDir": "/var/lib/docker/overlay2/cff612883004ab32fa13a69dfcba6214c9fdd98d230080eef796d074286be5a8-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cff612883004ab32fa13a69dfcba6214c9fdd98d230080eef796d074286be5a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cff612883004ab32fa13a69dfcba6214c9fdd98d230080eef796d074286be5a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cff612883004ab32fa13a69dfcba6214c9fdd98d230080eef796d074286be5a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-112878",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-112878/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-112878",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-112878",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-112878",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3cc68cf6aa4c2f3627776bdf6ab81a772dded3e85c54749eabb9d36ba1fb1142",
	            "SandboxKey": "/var/run/docker/netns/3cc68cf6aa4c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-112878": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:03:6e:74:ea:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "120e96e993d1ce75e5b49ee5a2ece0f97836f04b7e2bb3daf297bdcc6e4a8079",
	                    "EndpointID": "73e463d0fa38233df681b0843d5c0369565bfd8c824251c6f2f5c6493faadc82",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-112878",
	                        "8097e3bd54ba"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
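For reference, the inspect dump above is ordinary docker container inspect output and can be re-queried and parsed outside the test harness. The following is a minimal Go sketch, not part of helpers_test.go; the container struct is a hand-written subset of the JSON fields shown above, and the profile name is the one from this run:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Subset of the "docker container inspect" JSON above; only the
// fields read below are declared.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	// "docker container inspect" prints a JSON array, one element per container.
	out, err := exec.Command("docker", "container", "inspect", "default-k8s-diff-port-112878").Output()
	if err != nil {
		log.Fatal(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		log.Fatal(err)
	}
	if len(cs) == 0 {
		log.Fatal("no such container")
	}
	// 8444/tcp is the non-default apiserver port this profile was started with.
	for _, b := range cs[0].NetworkSettings.Ports["8444/tcp"] {
		fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
	}
}

The 8444/tcp entry corresponds to the --apiserver-port=8444 flag recorded for this profile in the audit table below, published on loopback as 127.0.0.1:33456.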
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-112878 -n default-k8s-diff-port-112878
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-112878 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-112878 logs -n 25: (1.346984395s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable metrics-server -p no-preload-171807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │                     │
	│ stop    │ -p no-preload-171807 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:40 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p no-preload-171807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-599709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ stop    │ -p embed-certs-599709 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p embed-certs-599709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:42 UTC │
	│ image   │ old-k8s-version-907112 image list --format=json                                                                                                                                                                                               │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-907112 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ delete  │ -p old-k8s-version-907112                                                                                                                                                                                                                     │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-907112                                                                                                                                                                                                                     │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ delete  │ -p disable-driver-mounts-220565                                                                                                                                                                                                               │ disable-driver-mounts-220565 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p default-k8s-diff-port-112878 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:42 UTC │
	│ image   │ no-preload-171807 image list --format=json                                                                                                                                                                                                    │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ pause   │ -p no-preload-171807 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ delete  │ -p no-preload-171807                                                                                                                                                                                                                          │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ delete  │ -p no-preload-171807                                                                                                                                                                                                                          │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-438547 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-137244    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-137244    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ image   │ embed-certs-599709 image list --format=json                                                                                                                                                                                                   │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ pause   │ -p embed-certs-599709 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-112878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:42:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:42:32.284642  756339 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:42:32.284938  756339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:42:32.284948  756339 out.go:374] Setting ErrFile to fd 2...
	I1017 19:42:32.284952  756339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:42:32.285167  756339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:42:32.285627  756339 out.go:368] Setting JSON to false
	I1017 19:42:32.286955  756339 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12291,"bootTime":1760717861,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:42:32.287079  756339 start.go:141] virtualization: kvm guest
	I1017 19:42:32.288866  756339 out.go:179] * [kubernetes-upgrade-137244] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:42:32.290361  756339 notify.go:220] Checking for updates...
	I1017 19:42:32.290381  756339 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:42:32.291717  756339 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:42:32.293887  756339 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:42:32.295817  756339 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:42:32.297786  756339 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:42:32.299504  756339 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:42:32.302054  756339 config.go:182] Loaded profile config "kubernetes-upgrade-137244": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:42:32.302866  756339 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:42:32.330847  756339 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:42:32.330958  756339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:42:32.404999  756339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 19:42:32.393716398 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:42:32.405120  756339 docker.go:318] overlay module found
	I1017 19:42:32.407813  756339 out.go:179] * Using the docker driver based on existing profile
	I1017 19:42:32.409163  756339 start.go:305] selected driver: docker
	I1017 19:42:32.409186  756339 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-137244 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-137244 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:42:32.409310  756339 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:42:32.410021  756339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:42:32.478656  756339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 19:42:32.466730101 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:42:32.479162  756339 cni.go:84] Creating CNI manager for ""
	I1017 19:42:32.479244  756339 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:42:32.479327  756339 start.go:349] cluster config:
	{Name:kubernetes-upgrade-137244 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-137244 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:42:32.481696  756339 out.go:179] * Starting "kubernetes-upgrade-137244" primary control-plane node in "kubernetes-upgrade-137244" cluster
	I1017 19:42:32.483108  756339 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:42:32.484546  756339 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:42:32.485879  756339 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:42:32.485924  756339 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:42:32.485945  756339 cache.go:58] Caching tarball of preloaded images
	I1017 19:42:32.485996  756339 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:42:32.486050  756339 preload.go:233] Found /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:42:32.486066  756339 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:42:32.486222  756339 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/kubernetes-upgrade-137244/config.json ...
	I1017 19:42:32.511005  756339 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:42:32.511029  756339 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:42:32.511049  756339 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:42:32.511082  756339 start.go:360] acquireMachinesLock for kubernetes-upgrade-137244: {Name:mk295f0c37c369f712e9c8f3857f62f6297f3f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:42:32.511151  756339 start.go:364] duration metric: took 47.722µs to acquireMachinesLock for "kubernetes-upgrade-137244"
	I1017 19:42:32.511179  756339 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:42:32.511186  756339 fix.go:54] fixHost starting: 
	I1017 19:42:32.511477  756339 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-137244 --format={{.State.Status}}
	I1017 19:42:32.531731  756339 fix.go:112] recreateIfNeeded on kubernetes-upgrade-137244: state=Running err=<nil>
	W1017 19:42:32.531776  756339 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:42:30.943588  753072 out.go:252]   - Booting up control plane ...
	I1017 19:42:30.943749  753072 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 19:42:30.943877  753072 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 19:42:30.943979  753072 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 19:42:30.958565  753072 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 19:42:30.958772  753072 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 19:42:30.965606  753072 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 19:42:30.965883  753072 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 19:42:30.965992  753072 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 19:42:31.077801  753072 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 19:42:31.077992  753072 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 19:42:32.579046  753072 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501467006s
	I1017 19:42:32.582509  753072 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 19:42:32.582645  753072 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1017 19:42:32.582797  753072 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 19:42:32.582906  753072 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	
	
	==> CRI-O <==
	Oct 17 19:42:23 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:23.667535149Z" level=info msg="Starting container: 381d4c8ae5acfb5b31ca5274a9b8298302911bbf49c22560b02ee7dfd4e9c4c0" id=0ae84941-7c59-4dfb-bcb1-e812aa43d7ea name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:42:23 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:23.669704921Z" level=info msg="Started container" PID=1852 containerID=381d4c8ae5acfb5b31ca5274a9b8298302911bbf49c22560b02ee7dfd4e9c4c0 description=kube-system/coredns-66bc5c9577-vckxk/coredns id=0ae84941-7c59-4dfb-bcb1-e812aa43d7ea name=/runtime.v1.RuntimeService/StartContainer sandboxID=88140b23e2f76d0dbbadd5f3b650914676b1611e58e7ab78b80500a3b24d501e
	Oct 17 19:42:26 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:26.22495748Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c9cd3800-36bf-44e4-8ad4-ef39f96e813a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:42:26 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:26.225099889Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:26 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:26.231949355Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:dc263f7ab548fd3b8a667dc6009da283301c3948730c64dfbdc913454bfb7f56 UID:a8098647-3058-4af7-ab8b-7ecb428988e6 NetNS:/var/run/netns/9f24091c-6f74-49b4-bfcc-80c7f775cd8f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000bb0310}] Aliases:map[]}"
	Oct 17 19:42:26 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:26.231994852Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 19:42:26 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:26.245990074Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:dc263f7ab548fd3b8a667dc6009da283301c3948730c64dfbdc913454bfb7f56 UID:a8098647-3058-4af7-ab8b-7ecb428988e6 NetNS:/var/run/netns/9f24091c-6f74-49b4-bfcc-80c7f775cd8f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000bb0310}] Aliases:map[]}"
	Oct 17 19:42:26 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:26.246235622Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 19:42:26 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:26.247183858Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 19:42:26 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:26.248443922Z" level=info msg="Ran pod sandbox dc263f7ab548fd3b8a667dc6009da283301c3948730c64dfbdc913454bfb7f56 with infra container: default/busybox/POD" id=c9cd3800-36bf-44e4-8ad4-ef39f96e813a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:42:26 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:26.249872367Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=84797619-e4a9-4479-8a0b-4a3108d0f1ec name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:42:26 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:26.250011857Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=84797619-e4a9-4479-8a0b-4a3108d0f1ec name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:42:26 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:26.250064963Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=84797619-e4a9-4479-8a0b-4a3108d0f1ec name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:42:26 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:26.250999141Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5b7a0eb4-ff3e-4992-8e2e-944a9df8cdf6 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:42:26 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:26.255062443Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 19:42:27 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:27.035464147Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=5b7a0eb4-ff3e-4992-8e2e-944a9df8cdf6 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:42:27 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:27.036666667Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=44fb600f-db73-4d29-87b5-f80c333ac675 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:42:27 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:27.038820961Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e1510c6b-9059-42f5-8446-b932e947eb13 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:42:27 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:27.04352409Z" level=info msg="Creating container: default/busybox/busybox" id=58bd38f3-9db0-449b-bd39-05f0dc637416 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:42:27 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:27.044437182Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:27 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:27.049181256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:27 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:27.049814167Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:27 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:27.084115841Z" level=info msg="Created container 7db6213c8db6717d21bf6200748a3c389ab5b987290d5926aa54765852d9d786: default/busybox/busybox" id=58bd38f3-9db0-449b-bd39-05f0dc637416 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:42:27 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:27.085341658Z" level=info msg="Starting container: 7db6213c8db6717d21bf6200748a3c389ab5b987290d5926aa54765852d9d786" id=f6ecc6c6-7ebc-4e1a-bfe1-df955edefb9e name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:42:27 default-k8s-diff-port-112878 crio[780]: time="2025-10-17T19:42:27.087669252Z" level=info msg="Started container" PID=1922 containerID=7db6213c8db6717d21bf6200748a3c389ab5b987290d5926aa54765852d9d786 description=default/busybox/busybox id=f6ecc6c6-7ebc-4e1a-bfe1-df955edefb9e name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc263f7ab548fd3b8a667dc6009da283301c3948730c64dfbdc913454bfb7f56
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	7db6213c8db67       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   dc263f7ab548f       busybox                                                default
	381d4c8ae5acf       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   88140b23e2f76       coredns-66bc5c9577-vckxk                               kube-system
	797ffb0028876       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   e9be0a0665268       storage-provisioner                                    kube-system
	cbbe41f4f6bc6       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   b32a3dfb4a193       kube-proxy-d2jpw                                       kube-system
	1120600a3f8d8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   04850b353a4ce       kindnet-xvc9b                                          kube-system
	4d79898da765b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   a6e2094cdccef       kube-scheduler-default-k8s-diff-port-112878            kube-system
	e465f579a086e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   19afbd3ff8bd7       kube-controller-manager-default-k8s-diff-port-112878   kube-system
	a1683aec5f014       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   7eee66aaa00c8       etcd-default-k8s-diff-port-112878                      kube-system
	2532abf4c7447       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   b4d4f9d9eb766       kube-apiserver-default-k8s-diff-port-112878            kube-system
	
	
	==> coredns [381d4c8ae5acfb5b31ca5274a9b8298302911bbf49c22560b02ee7dfd4e9c4c0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40714 - 32584 "HINFO IN 1097418671839470099.96814039584928665. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.055526185s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-112878
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-112878
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=default-k8s-diff-port-112878
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_42_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:42:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-112878
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:42:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:42:23 +0000   Fri, 17 Oct 2025 19:42:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:42:23 +0000   Fri, 17 Oct 2025 19:42:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:42:23 +0000   Fri, 17 Oct 2025 19:42:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:42:23 +0000   Fri, 17 Oct 2025 19:42:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-112878
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                d9945229-8d2c-480b-8ce0-8f084b03705d
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-vckxk                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-default-k8s-diff-port-112878                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-xvc9b                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-default-k8s-diff-port-112878             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-112878    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-d2jpw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-default-k8s-diff-port-112878             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node default-k8s-diff-port-112878 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node default-k8s-diff-port-112878 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node default-k8s-diff-port-112878 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s   node-controller  Node default-k8s-diff-port-112878 event: Registered Node default-k8s-diff-port-112878 in Controller
	  Normal  NodeReady                12s   kubelet          Node default-k8s-diff-port-112878 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d1 49 91 03 c2 08 06
	[  +0.000804] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 a9 2b 44 da ae 08 06
	[Oct17 18:59] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022229] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023876] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	
	
	==> etcd [a1683aec5f01424c25c237fff58961f7827402f5f9deecf6e71843aacf5bfec2] <==
	{"level":"warn","ts":"2025-10-17T19:42:03.618404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.627130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.636900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.645027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.652962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.661051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.668456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.676522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.684013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.692612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.699423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.707958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.715346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.722172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.729069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.746874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.754464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.762756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.769603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.777372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.784322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.800895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.809373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.816048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:42:03.865396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54342","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:42:35 up  3:24,  0 user,  load average: 3.90, 3.40, 2.19
	Linux default-k8s-diff-port-112878 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1120600a3f8d80d4ff7b79650e89688f9ae7efe75277eec19807c4650396ae49] <==
	I1017 19:42:12.908835       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:42:12.909173       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 19:42:12.909344       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:42:12.909361       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:42:12.909372       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:42:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:42:13.110796       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:42:13.110948       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:42:13.110971       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:42:13.111166       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:42:13.511156       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:42:13.511184       1 metrics.go:72] Registering metrics
	I1017 19:42:13.511243       1 controller.go:711] "Syncing nftables rules"
	I1017 19:42:23.084790       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:42:23.084873       1 main.go:301] handling current node
	I1017 19:42:33.084813       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:42:33.084876       1 main.go:301] handling current node
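
kindnet's "nri plugin exited: failed to connect to NRI service" line above is the plugin probing the runtime's NRI socket; the socket is absent on this node, so the plugin gives up while kindnet itself keeps running (the later "Handling node" lines show normal operation). A sketch of an equivalent socket probe, assuming only the path from the log; everything else is illustrative:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Probe the NRI socket the way a unix-domain client would; on this
        // node the dial fails with "no such file or directory".
        conn, err := net.Dial("unix", "/var/run/nri/nri.sock")
        if err != nil {
            fmt.Println("NRI unavailable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("NRI socket present")
    }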
	
	
	==> kube-apiserver [2532abf4c744730fda6a84c0bce5398c7202e598a2d8d0d8f60e350d0a32b3e2] <==
	I1017 19:42:04.402845       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 19:42:04.407269       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:42:04.407315       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1017 19:42:04.413809       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:42:04.414060       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:42:04.593941       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:42:05.306249       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 19:42:05.313570       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 19:42:05.313593       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:42:05.898000       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:42:05.942662       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:42:06.009166       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 19:42:06.016394       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1017 19:42:06.017608       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:42:06.023255       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:42:06.320058       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:42:06.876946       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 19:42:06.886557       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 19:42:06.896859       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 19:42:12.123351       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1017 19:42:12.123407       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1017 19:42:12.324711       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:42:12.329876       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:42:12.423589       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1017 19:42:34.049848       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:38706: use of closed network connection
	
	
	==> kube-controller-manager [e465f579a086ef75aa61e91258376c53546adb7729aa140b69abb340ceb7e2ce] <==
	I1017 19:42:11.321226       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:42:11.321250       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 19:42:11.321254       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 19:42:11.321344       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 19:42:11.321225       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 19:42:11.321365       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 19:42:11.321343       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 19:42:11.321445       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 19:42:11.321472       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 19:42:11.321499       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 19:42:11.324754       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 19:42:11.324784       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 19:42:11.324838       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 19:42:11.324882       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 19:42:11.324892       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 19:42:11.324898       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 19:42:11.327072       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:42:11.328188       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:42:11.331369       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 19:42:11.332638       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-112878" podCIDRs=["10.244.0.0/24"]
	I1017 19:42:11.341737       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:42:11.341759       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:42:11.341767       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 19:42:11.350544       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:42:26.273291       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [cbbe41f4f6bc6e7dbf270e63d171276ba8728066d46fffd7716966f4934a8c62] <==
	I1017 19:42:12.695640       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:42:12.762845       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:42:12.863371       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:42:12.863407       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 19:42:12.863490       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:42:12.883557       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:42:12.883619       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:42:12.889105       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:42:12.889609       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:42:12.889631       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:42:12.891116       1 config.go:200] "Starting service config controller"
	I1017 19:42:12.891138       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:42:12.891166       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:42:12.891182       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:42:12.891206       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:42:12.891216       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:42:12.892160       1 config.go:309] "Starting node config controller"
	I1017 19:42:12.892251       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:42:12.892263       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:42:12.991324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:42:12.991367       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:42:12.991373       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4d79898da765bff0ee6044c712a8943c207e10cfdd5915c83d306ff83159d17a] <==
	E1017 19:42:04.354218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:42:04.354277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:42:04.354290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:42:04.355265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:42:04.355289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:42:04.355346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:42:04.355361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:42:04.355410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:42:04.355432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:42:04.355512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:42:04.355516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:42:04.355617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:42:04.355633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:42:05.193206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:42:05.237739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:42:05.253660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:42:05.262203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:42:05.313995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:42:05.323333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:42:05.375381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:42:05.382773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:42:05.564543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:42:05.692402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 19:42:05.696537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1017 19:42:08.750376       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:42:07 default-k8s-diff-port-112878 kubelet[1328]: E1017 19:42:07.776727    1328 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-112878\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-112878"
	Oct 17 19:42:07 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:07.806674    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-112878" podStartSLOduration=1.806627383 podStartE2EDuration="1.806627383s" podCreationTimestamp="2025-10-17 19:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:42:07.791851758 +0000 UTC m=+1.159137399" watchObservedRunningTime="2025-10-17 19:42:07.806627383 +0000 UTC m=+1.173913021"
	Oct 17 19:42:07 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:07.807378    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-112878" podStartSLOduration=1.8073609720000001 podStartE2EDuration="1.807360972s" podCreationTimestamp="2025-10-17 19:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:42:07.807284827 +0000 UTC m=+1.174570461" watchObservedRunningTime="2025-10-17 19:42:07.807360972 +0000 UTC m=+1.174646593"
	Oct 17 19:42:07 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:07.835163    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-112878" podStartSLOduration=1.835140409 podStartE2EDuration="1.835140409s" podCreationTimestamp="2025-10-17 19:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:42:07.822589266 +0000 UTC m=+1.189874906" watchObservedRunningTime="2025-10-17 19:42:07.835140409 +0000 UTC m=+1.202426050"
	Oct 17 19:42:07 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:07.835332    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-112878" podStartSLOduration=1.835325218 podStartE2EDuration="1.835325218s" podCreationTimestamp="2025-10-17 19:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:42:07.834843727 +0000 UTC m=+1.202129367" watchObservedRunningTime="2025-10-17 19:42:07.835325218 +0000 UTC m=+1.202610859"
	Oct 17 19:42:11 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:11.385876    1328 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 17 19:42:11 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:11.386750    1328 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 19:42:12 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:12.251594    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/72c3c32f-e74f-46d2-a943-ca279ef893c7-kube-proxy\") pod \"kube-proxy-d2jpw\" (UID: \"72c3c32f-e74f-46d2-a943-ca279ef893c7\") " pod="kube-system/kube-proxy-d2jpw"
	Oct 17 19:42:12 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:12.251644    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72c3c32f-e74f-46d2-a943-ca279ef893c7-xtables-lock\") pod \"kube-proxy-d2jpw\" (UID: \"72c3c32f-e74f-46d2-a943-ca279ef893c7\") " pod="kube-system/kube-proxy-d2jpw"
	Oct 17 19:42:12 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:12.251660    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d53d141-fce2-4ae1-a29b-4cd44dd4fdea-xtables-lock\") pod \"kindnet-xvc9b\" (UID: \"9d53d141-fce2-4ae1-a29b-4cd44dd4fdea\") " pod="kube-system/kindnet-xvc9b"
	Oct 17 19:42:12 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:12.251674    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d53d141-fce2-4ae1-a29b-4cd44dd4fdea-lib-modules\") pod \"kindnet-xvc9b\" (UID: \"9d53d141-fce2-4ae1-a29b-4cd44dd4fdea\") " pod="kube-system/kindnet-xvc9b"
	Oct 17 19:42:12 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:12.251717    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72c3c32f-e74f-46d2-a943-ca279ef893c7-lib-modules\") pod \"kube-proxy-d2jpw\" (UID: \"72c3c32f-e74f-46d2-a943-ca279ef893c7\") " pod="kube-system/kube-proxy-d2jpw"
	Oct 17 19:42:12 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:12.251733    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2thw8\" (UniqueName: \"kubernetes.io/projected/9d53d141-fce2-4ae1-a29b-4cd44dd4fdea-kube-api-access-2thw8\") pod \"kindnet-xvc9b\" (UID: \"9d53d141-fce2-4ae1-a29b-4cd44dd4fdea\") " pod="kube-system/kindnet-xvc9b"
	Oct 17 19:42:12 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:12.251814    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cxmh\" (UniqueName: \"kubernetes.io/projected/72c3c32f-e74f-46d2-a943-ca279ef893c7-kube-api-access-7cxmh\") pod \"kube-proxy-d2jpw\" (UID: \"72c3c32f-e74f-46d2-a943-ca279ef893c7\") " pod="kube-system/kube-proxy-d2jpw"
	Oct 17 19:42:12 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:12.251894    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9d53d141-fce2-4ae1-a29b-4cd44dd4fdea-cni-cfg\") pod \"kindnet-xvc9b\" (UID: \"9d53d141-fce2-4ae1-a29b-4cd44dd4fdea\") " pod="kube-system/kindnet-xvc9b"
	Oct 17 19:42:12 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:12.801524    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-xvc9b" podStartSLOduration=0.801487202 podStartE2EDuration="801.487202ms" podCreationTimestamp="2025-10-17 19:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:42:12.790606987 +0000 UTC m=+6.157892640" watchObservedRunningTime="2025-10-17 19:42:12.801487202 +0000 UTC m=+6.168772842"
	Oct 17 19:42:13 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:13.128303    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d2jpw" podStartSLOduration=1.12827801 podStartE2EDuration="1.12827801s" podCreationTimestamp="2025-10-17 19:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:42:12.801426745 +0000 UTC m=+6.168712386" watchObservedRunningTime="2025-10-17 19:42:13.12827801 +0000 UTC m=+6.495563651"
	Oct 17 19:42:23 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:23.271525    1328 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 17 19:42:23 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:23.329294    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40aad458-e537-456b-8932-594d8406d02d-config-volume\") pod \"coredns-66bc5c9577-vckxk\" (UID: \"40aad458-e537-456b-8932-594d8406d02d\") " pod="kube-system/coredns-66bc5c9577-vckxk"
	Oct 17 19:42:23 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:23.329362    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rjvp\" (UniqueName: \"kubernetes.io/projected/40aad458-e537-456b-8932-594d8406d02d-kube-api-access-6rjvp\") pod \"coredns-66bc5c9577-vckxk\" (UID: \"40aad458-e537-456b-8932-594d8406d02d\") " pod="kube-system/coredns-66bc5c9577-vckxk"
	Oct 17 19:42:23 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:23.329396    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr2qw\" (UniqueName: \"kubernetes.io/projected/7ffb3a0e-4e95-4f0b-940d-c96fec7aa2cc-kube-api-access-lr2qw\") pod \"storage-provisioner\" (UID: \"7ffb3a0e-4e95-4f0b-940d-c96fec7aa2cc\") " pod="kube-system/storage-provisioner"
	Oct 17 19:42:23 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:23.329431    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7ffb3a0e-4e95-4f0b-940d-c96fec7aa2cc-tmp\") pod \"storage-provisioner\" (UID: \"7ffb3a0e-4e95-4f0b-940d-c96fec7aa2cc\") " pod="kube-system/storage-provisioner"
	Oct 17 19:42:23 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:23.831645    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.831623714 podStartE2EDuration="11.831623714s" podCreationTimestamp="2025-10-17 19:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:42:23.820471114 +0000 UTC m=+17.187756755" watchObservedRunningTime="2025-10-17 19:42:23.831623714 +0000 UTC m=+17.198909355"
	Oct 17 19:42:25 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:25.914721    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vckxk" podStartSLOduration=13.914673025 podStartE2EDuration="13.914673025s" podCreationTimestamp="2025-10-17 19:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:42:23.83155729 +0000 UTC m=+17.198842932" watchObservedRunningTime="2025-10-17 19:42:25.914673025 +0000 UTC m=+19.281958675"
	Oct 17 19:42:25 default-k8s-diff-port-112878 kubelet[1328]: I1017 19:42:25.945251    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4gn6\" (UniqueName: \"kubernetes.io/projected/a8098647-3058-4af7-ab8b-7ecb428988e6-kube-api-access-z4gn6\") pod \"busybox\" (UID: \"a8098647-3058-4af7-ab8b-7ecb428988e6\") " pod="default/busybox"
	
	
	==> storage-provisioner [797ffb0028876f4d90066b1e373080570728d97ffd5f630896ef10ac19e01b83] <==
	I1017 19:42:23.681249       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 19:42:23.690831       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 19:42:23.690888       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 19:42:23.693604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:23.700249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:42:23.700421       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 19:42:23.700647       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-112878_a4cf890e-230d-4621-8ff7-c24e684e9d96!
	I1017 19:42:23.700608       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f6462b38-005f-4c92-8d22-eea640034e0b", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-112878_a4cf890e-230d-4621-8ff7-c24e684e9d96 became leader
	W1017 19:42:23.705833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:23.710738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:42:23.801396       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-112878_a4cf890e-230d-4621-8ff7-c24e684e9d96!
	W1017 19:42:25.714880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:25.721233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:27.725048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:27.729537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:29.733863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:29.738502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:31.742602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:31.747197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:33.752067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:33.758197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:35.762293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:42:35.767620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
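
The storage-provisioner's repeated "v1 Endpoints is deprecated in v1.33+" warnings come from its leader election still using an Endpoints-based lock (visible in the LeaderElection event above). A hedged sketch of the Lease-based lock that client-go recommends instead; the lock name and namespace are taken from the log above, while the timings and in-cluster setup are illustrative, not minikube's actual code:

    package main

    import (
        "context"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        id, _ := os.Hostname()

        // Lease locks live in coordination.k8s.io/v1 and replace the
        // deprecated v1 Endpoints lock that triggers the warnings above.
        lock := &resourcelock.LeaseLock{
            LeaseMeta: metav1.ObjectMeta{
                Name:      "k8s.io-minikube-hostpath", // lock name from the log
                Namespace: "kube-system",
            },
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }
        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { /* start provisioning */ },
                OnStoppedLeading: func() { /* stop work */ },
            },
        })
    }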
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-112878 -n default-k8s-diff-port-112878
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-112878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.76s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.6s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-438547 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-438547 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (314.251716ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:42:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-438547 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
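The MK_ADDON_ENABLE_PAUSED failure above is minikube's pre-flight check for paused containers: it runs `sudo runc list -f json` on the node, and on this crio node the default runc state directory /run/runc does not exist, so the command exits with status 1. A sketch of the same check, run inside the node (for example via `minikube ssh -p newest-cni-438547`); the JSON fields match runc's documented list output, while the error handling here is illustrative:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // runcState mirrors the fields of interest in `runc list -f json` output.
    type runcState struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    func main() {
        // Mirrors the failing command from the log: sudo runc list -f json.
        // With cri-o, /run/runc may be absent and runc exits with status 1,
        // which is exactly the error surfaced above.
        out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
        if err != nil {
            log.Fatalf("runc list: %v", err)
        }
        var states []runcState
        if err := json.Unmarshal(out, &states); err != nil {
            log.Fatalf("decode: %v", err)
        }
        for _, s := range states {
            if s.Status == "paused" {
                fmt.Println("paused container:", s.ID)
            }
        }
    }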
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-438547
helpers_test.go:243: (dbg) docker inspect newest-cni-438547:

-- stdout --
	[
	    {
	        "Id": "54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a",
	        "Created": "2025-10-17T19:42:20.078531564Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 753981,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:42:20.121472635Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a/hostname",
	        "HostsPath": "/var/lib/docker/containers/54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a/hosts",
	        "LogPath": "/var/lib/docker/containers/54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a/54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a-json.log",
	        "Name": "/newest-cni-438547",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-438547:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-438547",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a",
	                "LowerDir": "/var/lib/docker/overlay2/b72f80e5d1663080bbedf26b788b7f64c463dad3e253926c9453e0666b33a8a4-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b72f80e5d1663080bbedf26b788b7f64c463dad3e253926c9453e0666b33a8a4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b72f80e5d1663080bbedf26b788b7f64c463dad3e253926c9453e0666b33a8a4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b72f80e5d1663080bbedf26b788b7f64c463dad3e253926c9453e0666b33a8a4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-438547",
	                "Source": "/var/lib/docker/volumes/newest-cni-438547/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-438547",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-438547",
	                "name.minikube.sigs.k8s.io": "newest-cni-438547",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5d802b82d7ba2802d5f9b7310478b3ff3bec2d4282c1fab9b23a0000914160c0",
	            "SandboxKey": "/var/run/docker/netns/5d802b82d7ba",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-438547": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:96:28:da:e9:e0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "77fe0b660d34aea0508d43e4b8b59b631dd8d785f42a3fec7199378905db0191",
	                    "EndpointID": "121f91f4739f2ddd34907e8bd02d73ae1ad8ba05f08a569815d0e3db5ab9f48e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-438547",
	                        "54bdc696aaf8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
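When only one field from an inspect dump like the one above is needed, `docker inspect --format` with a Go template avoids scraping the full JSON. A sketch that pulls the host port mapped to the container's 8443/tcp (container name and port taken from this dump; per the output above this should print 33461):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Equivalent to: docker inspect -f '<template>' newest-cni-438547
        tmpl := `{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}`
        out, err := exec.Command("docker", "inspect", "-f", tmpl, "newest-cni-438547").Output()
        if err != nil {
            log.Fatalf("docker inspect: %v", err)
        }
        fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
    }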
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438547 -n newest-cni-438547
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-438547 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-438547 logs -n 25: (2.059970661s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-599709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:42 UTC │
	│ image   │ old-k8s-version-907112 image list --format=json                                                                                                                                                                                               │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-907112 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │                     │
	│ delete  │ -p old-k8s-version-907112                                                                                                                                                                                                                     │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-907112                                                                                                                                                                                                                     │ old-k8s-version-907112       │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ delete  │ -p disable-driver-mounts-220565                                                                                                                                                                                                               │ disable-driver-mounts-220565 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:41 UTC │
	│ start   │ -p default-k8s-diff-port-112878 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:42 UTC │
	│ image   │ no-preload-171807 image list --format=json                                                                                                                                                                                                    │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ pause   │ -p no-preload-171807 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ delete  │ -p no-preload-171807                                                                                                                                                                                                                          │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ delete  │ -p no-preload-171807                                                                                                                                                                                                                          │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-438547 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ start   │ -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-137244    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-137244    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ image   │ embed-certs-599709 image list --format=json                                                                                                                                                                                                   │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ pause   │ -p embed-certs-599709 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-112878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-112878 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-137244                                                                                                                                                                                                                  │ kubernetes-upgrade-137244    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ delete  │ -p embed-certs-599709                                                                                                                                                                                                                         │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ start   │ -p auto-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ delete  │ -p embed-certs-599709                                                                                                                                                                                                                         │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ start   │ -p enable-default-cni-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio                                                                               │ enable-default-cni-448344    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-438547 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
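	
	The audit table above and the "==> ... <==" sections that follow use the standard output format of minikube's log bundler. A minimal sketch for capturing the same report from a live profile (assuming the profile still exists on the host):
	
	    # dump the audit table plus per-component logs to a file
	    minikube -p newest-cni-438547 logs --file=logs.txt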
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:42:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:42:42.936999  761258 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:42:42.937385  761258 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:42:42.937402  761258 out.go:374] Setting ErrFile to fd 2...
	I1017 19:42:42.937409  761258 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:42:42.937755  761258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:42:42.938536  761258 out.go:368] Setting JSON to false
	I1017 19:42:42.940146  761258 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12302,"bootTime":1760717861,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:42:42.940278  761258 start.go:141] virtualization: kvm guest
	I1017 19:42:42.945618  761258 out.go:179] * [enable-default-cni-448344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:42:42.947914  761258 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:42:42.947907  761258 notify.go:220] Checking for updates...
	I1017 19:42:42.950778  761258 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:42:42.952457  761258 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:42:42.956540  761258 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:42:42.958044  761258 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:42:42.959468  761258 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:42:39.928970  753072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:42:40.428535  753072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:42:40.928907  753072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:42:41.428910  753072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:42:41.928391  753072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:42:42.428940  753072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:42:42.928889  753072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:42:43.029566  753072 kubeadm.go:1113] duration metric: took 4.206061124s to wait for elevateKubeSystemPrivileges
	I1017 19:42:43.029621  753072 kubeadm.go:402] duration metric: took 17.059281023s to StartCluster
	I1017 19:42:43.029648  753072 settings.go:142] acquiring lock: {Name:mkb8ebc6edbbb6915dd74086f502bcc2721555a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:42:43.029759  753072 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:42:43.031096  753072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:42:43.031391  753072 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:42:43.031549  753072 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 19:42:43.031975  753072 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:42:43.032093  753072 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-438547"
	I1017 19:42:43.032123  753072 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-438547"
	I1017 19:42:43.032123  753072 addons.go:69] Setting default-storageclass=true in profile "newest-cni-438547"
	I1017 19:42:43.032167  753072 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-438547"
	I1017 19:42:43.032176  753072 host.go:66] Checking if "newest-cni-438547" exists ...
	I1017 19:42:43.032607  753072 cli_runner.go:164] Run: docker container inspect newest-cni-438547 --format={{.State.Status}}
	I1017 19:42:43.032664  753072 config.go:182] Loaded profile config "newest-cni-438547": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:42:43.033731  753072 cli_runner.go:164] Run: docker container inspect newest-cni-438547 --format={{.State.Status}}
	I1017 19:42:43.036122  753072 out.go:179] * Verifying Kubernetes components...
	I1017 19:42:43.038334  753072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:42:43.063856  753072 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 19:42:42.962106  761258 config.go:182] Loaded profile config "auto-448344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:42:42.962355  761258 config.go:182] Loaded profile config "default-k8s-diff-port-112878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:42:42.962679  761258 config.go:182] Loaded profile config "newest-cni-438547": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:42:42.962963  761258 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:42:42.992106  761258 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:42:42.992301  761258 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:42:43.075342  761258 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:93 SystemTime:2025-10-17 19:42:43.060276648 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:42:43.075623  761258 docker.go:318] overlay module found
	I1017 19:42:43.078101  761258 out.go:179] * Using the docker driver based on user configuration
	I1017 19:42:43.079430  761258 start.go:305] selected driver: docker
	I1017 19:42:43.079486  761258 start.go:925] validating driver "docker" against <nil>
	I1017 19:42:43.079517  761258 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:42:43.080365  761258 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:42:43.175250  761258 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:91 SystemTime:2025-10-17 19:42:43.162639611 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:42:43.175482  761258 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E1017 19:42:43.175796  761258 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1017 19:42:43.175828  761258 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:42:43.177854  761258 out.go:179] * Using Docker driver with root privileges
	I1017 19:42:43.182811  761258 cni.go:84] Creating CNI manager for "bridge"
	I1017 19:42:43.182850  761258 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1017 19:42:43.182959  761258 start.go:349] cluster config:
	{Name:enable-default-cni-448344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:enable-default-cni-448344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:42:43.184522  761258 out.go:179] * Starting "enable-default-cni-448344" primary control-plane node in "enable-default-cni-448344" cluster
	I1017 19:42:43.185879  761258 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:42:43.187089  761258 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:42:43.188174  761258 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:42:43.188226  761258 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:42:43.188241  761258 cache.go:58] Caching tarball of preloaded images
	I1017 19:42:43.188354  761258 preload.go:233] Found /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:42:43.188351  761258 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:42:43.188370  761258 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:42:43.188595  761258 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/enable-default-cni-448344/config.json ...
	I1017 19:42:43.188628  761258 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/enable-default-cni-448344/config.json: {Name:mkabd87a00bcba868cbff11b6965d8f00836243c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:42:43.218311  761258 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:42:43.218340  761258 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:42:43.218361  761258 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:42:43.218394  761258 start.go:360] acquireMachinesLock for enable-default-cni-448344: {Name:mk04b33239475adf049e35a0d07646043e769f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:42:43.218509  761258 start.go:364] duration metric: took 90.02µs to acquireMachinesLock for "enable-default-cni-448344"
	I1017 19:42:43.218539  761258 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-448344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:enable-default-cni-448344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:42:43.218644  761258 start.go:125] createHost starting for "" (driver="docker")
	I1017 19:42:43.065178  753072 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:42:43.065201  753072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 19:42:43.065262  753072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:42:43.066037  753072 addons.go:238] Setting addon default-storageclass=true in "newest-cni-438547"
	I1017 19:42:43.066088  753072 host.go:66] Checking if "newest-cni-438547" exists ...
	I1017 19:42:43.066586  753072 cli_runner.go:164] Run: docker container inspect newest-cni-438547 --format={{.State.Status}}
	I1017 19:42:43.101101  753072 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 19:42:43.101127  753072 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 19:42:43.101195  753072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:42:43.103409  753072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:42:43.144308  753072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:42:43.152903  753072 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 19:42:43.215131  753072 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:42:43.239149  753072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:42:43.271241  753072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 19:42:43.362738  753072 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1017 19:42:43.365378  753072 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:42:43.365453  753072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:42:43.931363  753072 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-438547" context rescaled to 1 replicas
	I1017 19:42:44.118961  753072 api_server.go:72] duration metric: took 1.087526995s to wait for apiserver process to appear ...
	I1017 19:42:44.118997  753072 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:42:44.119023  753072 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 19:42:44.126305  753072 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1017 19:42:44.128119  753072 api_server.go:141] control plane version: v1.34.1
	I1017 19:42:44.128155  753072 api_server.go:131] duration metric: took 9.14964ms to wait for apiserver health ...
	I1017 19:42:44.128166  753072 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:42:44.133821  753072 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1017 19:42:44.135389  753072 system_pods.go:59] 8 kube-system pods found
	I1017 19:42:44.135430  753072 system_pods.go:61] "coredns-66bc5c9577-8pfhn" [6d0a8a45-e3f8-4e59-b735-4f1236cf5953] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 19:42:44.135442  753072 system_pods.go:61] "etcd-newest-cni-438547" [aaf7399b-5274-44fa-a929-a515b9341276] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:42:44.135455  753072 system_pods.go:61] "kindnet-nhg7f" [368f40c9-2ab9-4d9d-9310-950d3371f4c0] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:42:44.135464  753072 system_pods.go:61] "kube-apiserver-newest-cni-438547" [25c05b7c-518e-4bc1-94cc-e2a8a04f104b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:42:44.135473  753072 system_pods.go:61] "kube-controller-manager-newest-cni-438547" [eba5d490-129b-4739-95bd-e10a4fd73c40] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:42:44.135481  753072 system_pods.go:61] "kube-proxy-zfk4z" [a38161c3-4097-4e85-b391-e3b730dd90b6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:42:44.135493  753072 system_pods.go:61] "kube-scheduler-newest-cni-438547" [8210e114-0804-429b-8518-30042567db4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:42:44.135500  753072 system_pods.go:61] "storage-provisioner" [39d961dc-a8fd-4066-b46e-3e02ec6d04f6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 19:42:44.135512  753072 system_pods.go:74] duration metric: took 7.338145ms to wait for pod list to return data ...
	I1017 19:42:44.135533  753072 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:42:44.135885  753072 addons.go:514] duration metric: took 1.1039147s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1017 19:42:44.138267  753072 default_sa.go:45] found service account: "default"
	I1017 19:42:44.138294  753072 default_sa.go:55] duration metric: took 2.752607ms for default service account to be created ...
	I1017 19:42:44.138311  753072 kubeadm.go:586] duration metric: took 1.10688355s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 19:42:44.138338  753072 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:42:44.141040  753072 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 19:42:44.141070  753072 node_conditions.go:123] node cpu capacity is 8
	I1017 19:42:44.141087  753072 node_conditions.go:105] duration metric: took 2.742351ms to run NodePressure ...
	I1017 19:42:44.141103  753072 start.go:241] waiting for startup goroutines ...
	I1017 19:42:44.141118  753072 start.go:246] waiting for cluster config update ...
	I1017 19:42:44.141133  753072 start.go:255] writing updated cluster config ...
	I1017 19:42:44.141480  753072 ssh_runner.go:195] Run: rm -f paused
	I1017 19:42:44.211506  753072 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 19:42:44.228222  753072 out.go:179] * Done! kubectl is now configured to use "newest-cni-438547" cluster and "default" namespace by default
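	
	Two concurrent minikube processes are interleaved in the start log above: PID 761258 starting "enable-default-cni-448344" and PID 753072 finishing "newest-cni-438547". Note also the E-level line recording that the deprecated --enable-default-cni flag is rewritten internally to --cni=bridge; a sketch of the equivalent non-deprecated invocation:
	
	    # same effect as --enable-default-cni, without the deprecation warning
	    minikube start -p enable-default-cni-448344 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio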
	
	
	==> CRI-O <==
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.518724504Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.55973182Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6507d422-4a7a-4ea0-aae0-5b8b904b664f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.568291337Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.569392142Z" level=info msg="Ran pod sandbox 499856d6d9468a5697782b0f2c8441b3a785162a8b8572d3b0ea37e8c9d91288 with infra container: kube-system/kube-proxy-zfk4z/POD" id=6507d422-4a7a-4ea0-aae0-5b8b904b664f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.570575447Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=44f35aca-03ae-4fbe-8a48-7626f20db245 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.571044196Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fd4a4fba-3fa9-438b-b421-b52794d9fe8e name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.572993823Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=7c801d1f-766d-41ea-abd8-3ef581064251 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.573151833Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.573898165Z" level=info msg="Ran pod sandbox 146130f095213470ef593ef179f69cfe9b3b4cac7fd7d2bf0df38b1ce873d643 with infra container: kube-system/kindnet-nhg7f/POD" id=44f35aca-03ae-4fbe-8a48-7626f20db245 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.575210279Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=980aa72d-c9d5-4f2d-b782-759bd3addc07 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.576271762Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=17473170-15f2-4a56-a64a-d70d4a02544f name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.577605815Z" level=info msg="Creating container: kube-system/kube-proxy-zfk4z/kube-proxy" id=1fc37cb6-c986-4184-bd07-04d25940b893 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.57792991Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.586584861Z" level=info msg="Creating container: kube-system/kindnet-nhg7f/kindnet-cni" id=95c53913-7d1b-420d-b80f-52615edcf97b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.587775842Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.588435262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.589187396Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.593955309Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.595039483Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.63319398Z" level=info msg="Created container 617ef6ebb7328471dbd05df4d5c5b0ec86a5b7949052e0b9a1f6754d8d3e6730: kube-system/kindnet-nhg7f/kindnet-cni" id=95c53913-7d1b-420d-b80f-52615edcf97b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.634140342Z" level=info msg="Starting container: 617ef6ebb7328471dbd05df4d5c5b0ec86a5b7949052e0b9a1f6754d8d3e6730" id=a1925fae-23c3-42a8-b22f-733d9a3f4b45 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.635086777Z" level=info msg="Created container 4e9748f62cb0c9e5bc23f4cb3a70af1557d7ac23e059ac7925d1c8219f895852: kube-system/kube-proxy-zfk4z/kube-proxy" id=1fc37cb6-c986-4184-bd07-04d25940b893 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.636480718Z" level=info msg="Starting container: 4e9748f62cb0c9e5bc23f4cb3a70af1557d7ac23e059ac7925d1c8219f895852" id=bb300cb9-b90a-4660-8bb8-f3fa452637b2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.639140157Z" level=info msg="Started container" PID=1634 containerID=617ef6ebb7328471dbd05df4d5c5b0ec86a5b7949052e0b9a1f6754d8d3e6730 description=kube-system/kindnet-nhg7f/kindnet-cni id=a1925fae-23c3-42a8-b22f-733d9a3f4b45 name=/runtime.v1.RuntimeService/StartContainer sandboxID=146130f095213470ef593ef179f69cfe9b3b4cac7fd7d2bf0df38b1ce873d643
	Oct 17 19:42:44 newest-cni-438547 crio[778]: time="2025-10-17T19:42:44.643568087Z" level=info msg="Started container" PID=1633 containerID=4e9748f62cb0c9e5bc23f4cb3a70af1557d7ac23e059ac7925d1c8219f895852 description=kube-system/kube-proxy-zfk4z/kube-proxy id=bb300cb9-b90a-4660-8bb8-f3fa452637b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=499856d6d9468a5697782b0f2c8441b3a785162a8b8572d3b0ea37e8c9d91288
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	617ef6ebb7328       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   146130f095213       kindnet-nhg7f                               kube-system
	4e9748f62cb0c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   499856d6d9468       kube-proxy-zfk4z                            kube-system
	8277a241cd4f8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   13 seconds ago      Running             kube-controller-manager   0                   ab3c2cb6e7248       kube-controller-manager-newest-cni-438547   kube-system
	d9f50d067d90a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   13 seconds ago      Running             etcd                      0                   420afb240611b       etcd-newest-cni-438547                      kube-system
	e0aeae8cf4b40       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   13 seconds ago      Running             kube-apiserver            0                   dca823bd9303a       kube-apiserver-newest-cni-438547            kube-system
	c284c9c709003       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   13 seconds ago      Running             kube-scheduler            0                   04637bfa797a6       kube-scheduler-newest-cni-438547            kube-system
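	
	The table above follows crictl's container listing format; a sketch for reproducing it inside the node over minikube's SSH wrapper (assuming the cluster is still running):
	
	    # crictl queries CRI-O through its CRI socket on the node
	    minikube -p newest-cni-438547 ssh -- sudo crictl ps -a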
	
	
	==> describe nodes <==
	Name:               newest-cni-438547
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-438547
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=newest-cni-438547
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_42_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:42:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-438547
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:42:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:42:37 +0000   Fri, 17 Oct 2025 19:42:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:42:37 +0000   Fri, 17 Oct 2025 19:42:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:42:37 +0000   Fri, 17 Oct 2025 19:42:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 17 Oct 2025 19:42:37 +0000   Fri, 17 Oct 2025 19:42:32 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-438547
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                6f16ffd1-311d-4f27-b795-37ce231ef7a2
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-438547                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10s
	  kube-system                 kindnet-nhg7f                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-438547             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-438547    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-zfk4z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-438547             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  Starting                 14s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node newest-cni-438547 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node newest-cni-438547 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s (x8 over 14s)  kubelet          Node newest-cni-438547 status is now: NodeHasSufficientPID
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-438547 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-438547 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s                 kubelet          Node newest-cni-438547 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-438547 event: Registered Node newest-cni-438547 in Controller
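	
	The node description above is plain `kubectl describe node` output; the node.kubernetes.io/not-ready:NoSchedule taint it shows is why coredns and storage-provisioner were still Pending in the earlier pod list. A sketch, assuming the kubeconfig context minikube created for this profile:
	
	    kubectl --context newest-cni-438547 describe node newest-cni-438547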
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d1 49 91 03 c2 08 06
	[  +0.000804] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 a9 2b 44 da ae 08 06
	[Oct17 18:59] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022229] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023876] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
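	
	The repeated "martian source ... from 127.0.0.1" entries mean the kernel saw packets with a loopback source address arrive on eth0, which it logs when log_martians is enabled; their 18:59-19:00 timestamps predate this 19:42 run, so they appear to be residue from earlier tests rather than a failure here. A sketch for pulling the same ring buffer from the node:
	
	    minikube -p newest-cni-438547 ssh -- sudo dmesg --ctime | tail -n 50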
	
	
	==> etcd [d9f50d067d90a5c9ff1b6dd3167897e809d36448ebc9eb032c70f01573da3aff] <==
	{"level":"info","ts":"2025-10-17T19:42:43.783980Z","caller":"traceutil/trace.go:172","msg":"trace[944876227] transaction","detail":"{read_only:false; response_revision:354; number_of_response:1; }","duration":"115.047193ms","start":"2025-10-17T19:42:43.668920Z","end":"2025-10-17T19:42:43.783967Z","steps":["trace[944876227] 'process raft request'  (duration: 114.719869ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:43.784083Z","caller":"traceutil/trace.go:172","msg":"trace[714369419] transaction","detail":"{read_only:false; response_revision:356; number_of_response:1; }","duration":"114.761789ms","start":"2025-10-17T19:42:43.669310Z","end":"2025-10-17T19:42:43.784072Z","steps":["trace[714369419] 'process raft request'  (duration: 114.588635ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:43.784156Z","caller":"traceutil/trace.go:172","msg":"trace[1915321847] transaction","detail":"{read_only:false; response_revision:355; number_of_response:1; }","duration":"114.859736ms","start":"2025-10-17T19:42:43.669284Z","end":"2025-10-17T19:42:43.784144Z","steps":["trace[1915321847] 'process raft request'  (duration: 114.551745ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:43.919119Z","caller":"traceutil/trace.go:172","msg":"trace[1382570153] linearizableReadLoop","detail":"{readStateIndex:366; appliedIndex:366; }","duration":"129.258645ms","start":"2025-10-17T19:42:43.789813Z","end":"2025-10-17T19:42:43.919071Z","steps":["trace[1382570153] 'read index received'  (duration: 129.246186ms)","trace[1382570153] 'applied index is now lower than readState.Index'  (duration: 10.724µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T19:42:43.925024Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.180777ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" limit:1 ","response":"range_response_count:1 size:370"}
	{"level":"info","ts":"2025-10-17T19:42:43.925049Z","caller":"traceutil/trace.go:172","msg":"trace[1137310725] transaction","detail":"{read_only:false; response_revision:357; number_of_response:1; }","duration":"136.66123ms","start":"2025-10-17T19:42:43.788372Z","end":"2025-10-17T19:42:43.925033Z","steps":["trace[1137310725] 'process raft request'  (duration: 130.777777ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:43.925111Z","caller":"traceutil/trace.go:172","msg":"trace[1179612568] range","detail":"{range_begin:/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking; range_end:; response_count:1; response_revision:356; }","duration":"135.269679ms","start":"2025-10-17T19:42:43.789807Z","end":"2025-10-17T19:42:43.925077Z","steps":["trace[1179612568] 'agreement among raft nodes before linearized reading'  (duration: 129.376685ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:42:43.926605Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.672914ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4059"}
	{"level":"info","ts":"2025-10-17T19:42:43.926663Z","caller":"traceutil/trace.go:172","msg":"trace[1892934231] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:357; }","duration":"131.748425ms","start":"2025-10-17T19:42:43.794903Z","end":"2025-10-17T19:42:43.926652Z","steps":["trace[1892934231] 'agreement among raft nodes before linearized reading'  (duration: 131.548662ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:42:43.926877Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.515174ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-cidrs-controller\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-10-17T19:42:43.927223Z","caller":"traceutil/trace.go:172","msg":"trace[1251505360] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-cidrs-controller; range_end:; response_count:1; response_revision:359; }","duration":"115.874254ms","start":"2025-10-17T19:42:43.811339Z","end":"2025-10-17T19:42:43.927214Z","steps":["trace[1251505360] 'agreement among raft nodes before linearized reading'  (duration: 115.458134ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:43.926974Z","caller":"traceutil/trace.go:172","msg":"trace[157709226] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"130.340456ms","start":"2025-10-17T19:42:43.796616Z","end":"2025-10-17T19:42:43.926957Z","steps":["trace[157709226] 'process raft request'  (duration: 130.302993ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:43.927001Z","caller":"traceutil/trace.go:172","msg":"trace[2074188482] transaction","detail":"{read_only:false; response_revision:358; number_of_response:1; }","duration":"134.942299ms","start":"2025-10-17T19:42:43.792040Z","end":"2025-10-17T19:42:43.926983Z","steps":["trace[2074188482] 'process raft request'  (duration: 134.613022ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:43.927018Z","caller":"traceutil/trace.go:172","msg":"trace[528235262] transaction","detail":"{read_only:false; response_revision:360; number_of_response:1; }","duration":"134.353587ms","start":"2025-10-17T19:42:43.792659Z","end":"2025-10-17T19:42:43.927012Z","steps":["trace[528235262] 'process raft request'  (duration: 134.148802ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:43.927092Z","caller":"traceutil/trace.go:172","msg":"trace[691787346] transaction","detail":"{read_only:false; response_revision:359; number_of_response:1; }","duration":"134.867741ms","start":"2025-10-17T19:42:43.792212Z","end":"2025-10-17T19:42:43.927080Z","steps":["trace[691787346] 'process raft request'  (duration: 134.554035ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:43.927138Z","caller":"traceutil/trace.go:172","msg":"trace[1024831879] transaction","detail":"{read_only:false; response_revision:362; number_of_response:1; }","duration":"133.963576ms","start":"2025-10-17T19:42:43.793164Z","end":"2025-10-17T19:42:43.927128Z","steps":["trace[1024831879] 'process raft request'  (duration: 133.72192ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:43.927163Z","caller":"traceutil/trace.go:172","msg":"trace[95425773] transaction","detail":"{read_only:false; response_revision:361; number_of_response:1; }","duration":"134.111249ms","start":"2025-10-17T19:42:43.793047Z","end":"2025-10-17T19:42:43.927158Z","steps":["trace[95425773] 'process raft request'  (duration: 133.802269ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:44.056738Z","caller":"traceutil/trace.go:172","msg":"trace[1077270558] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"122.076374ms","start":"2025-10-17T19:42:43.934637Z","end":"2025-10-17T19:42:44.056714Z","steps":["trace[1077270558] 'process raft request'  (duration: 118.0912ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:44.056784Z","caller":"traceutil/trace.go:172","msg":"trace[1236413645] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"121.507989ms","start":"2025-10-17T19:42:43.935263Z","end":"2025-10-17T19:42:44.056771Z","steps":["trace[1236413645] 'process raft request'  (duration: 121.352632ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:44.056973Z","caller":"traceutil/trace.go:172","msg":"trace[913964274] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"117.087688ms","start":"2025-10-17T19:42:43.939869Z","end":"2025-10-17T19:42:44.056956Z","steps":["trace[913964274] 'process raft request'  (duration: 116.979963ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:44.057039Z","caller":"traceutil/trace.go:172","msg":"trace[1000896065] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"118.594537ms","start":"2025-10-17T19:42:43.938435Z","end":"2025-10-17T19:42:44.057029Z","steps":["trace[1000896065] 'process raft request'  (duration: 118.33494ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:44.057076Z","caller":"traceutil/trace.go:172","msg":"trace[1289382252] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"119.213314ms","start":"2025-10-17T19:42:43.937850Z","end":"2025-10-17T19:42:44.057064Z","steps":["trace[1289382252] 'process raft request'  (duration: 118.892525ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:44.057106Z","caller":"traceutil/trace.go:172","msg":"trace[819201078] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"120.883808ms","start":"2025-10-17T19:42:43.936154Z","end":"2025-10-17T19:42:44.057038Z","steps":["trace[819201078] 'process raft request'  (duration: 120.548613ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:44.057152Z","caller":"traceutil/trace.go:172","msg":"trace[1753888354] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"117.956531ms","start":"2025-10-17T19:42:43.939177Z","end":"2025-10-17T19:42:44.057133Z","steps":["trace[1753888354] 'process raft request'  (duration: 117.636821ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:42:45.059957Z","caller":"traceutil/trace.go:172","msg":"trace[857593322] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"120.156612ms","start":"2025-10-17T19:42:44.939780Z","end":"2025-10-17T19:42:45.059937Z","steps":["trace[857593322] 'process raft request'  (duration: 120.008721ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:42:46 up  3:25,  0 user,  load average: 3.85, 3.40, 2.20
	Linux newest-cni-438547 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [617ef6ebb7328471dbd05df4d5c5b0ec86a5b7949052e0b9a1f6754d8d3e6730] <==
	I1017 19:42:44.883296       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:42:44.883576       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1017 19:42:44.883787       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:42:44.883817       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:42:44.883847       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:42:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:42:45.101215       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:42:45.101247       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:42:45.101259       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:42:45.101390       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [e0aeae8cf4b40b826ae1cf9398f60963049854d0159f2bb5ae9fd34a9dae2bda] <==
	I1017 19:42:35.051027       1 policy_source.go:240] refreshing policies
	E1017 19:42:35.091296       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1017 19:42:35.129554       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 19:42:35.134555       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:42:35.135479       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1017 19:42:35.151551       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:42:35.154496       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:42:35.237590       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:42:35.931386       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 19:42:35.935934       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 19:42:35.935959       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:42:36.697767       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:42:36.760990       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:42:36.838822       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 19:42:36.848432       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1017 19:42:36.850073       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:42:36.856013       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:42:37.715127       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:42:37.902382       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 19:42:37.914119       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 19:42:37.924916       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 19:42:43.467999       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:42:43.792532       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1017 19:42:44.057810       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:42:44.070246       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [8277a241cd4f84e8396a5b60097f08f2e409f294ac91dce8a3d880af006d94b4] <==
	I1017 19:42:42.713914       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 19:42:42.713999       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-438547"
	I1017 19:42:42.714042       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1017 19:42:42.715146       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 19:42:42.715473       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 19:42:42.715507       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 19:42:42.715543       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 19:42:42.715741       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 19:42:42.715818       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 19:42:42.715804       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 19:42:42.716302       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:42:42.716385       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 19:42:42.716952       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 19:42:42.720429       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 19:42:42.720476       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 19:42:42.721642       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 19:42:42.724252       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:42:42.724262       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 19:42:42.734570       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 19:42:42.750933       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:42:42.752606       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 19:42:42.761226       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:42:42.767542       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:42:42.767630       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:42:42.767640       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [4e9748f62cb0c9e5bc23f4cb3a70af1557d7ac23e059ac7925d1c8219f895852] <==
	I1017 19:42:44.703669       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:42:44.783660       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:42:44.884104       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:42:44.884163       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1017 19:42:44.884314       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:42:44.905642       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:42:44.905722       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:42:44.912058       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:42:44.912591       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:42:44.912638       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:42:44.914353       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:42:44.914382       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:42:44.914451       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:42:44.914461       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:42:44.914481       1 config.go:200] "Starting service config controller"
	I1017 19:42:44.914487       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:42:44.915034       1 config.go:309] "Starting node config controller"
	I1017 19:42:44.915693       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:42:44.915855       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:42:45.014497       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:42:45.014563       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:42:45.014536       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c284c9c709003ab2c0972691d08356ddf70678f99bc72410bbec80fc4c708050] <==
	E1017 19:42:34.992215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:42:34.992675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:42:34.992815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:42:34.993041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:42:34.992890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:42:34.992840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:42:34.993181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:42:35.815134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:42:35.828798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:42:35.936138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:42:35.938180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:42:35.944935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:42:35.988819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:42:36.067770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:42:36.119723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:42:36.120195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:42:36.143578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:42:36.174741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:42:36.197976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:42:36.216702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:42:36.269044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:42:36.311353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 19:42:36.330262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:42:36.335596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1017 19:42:38.783909       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:42:38 newest-cni-438547 kubelet[1328]: I1017 19:42:38.083994    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/21df47e05de3707ed4e607f63556d336-flexvolume-dir\") pod \"kube-controller-manager-newest-cni-438547\" (UID: \"21df47e05de3707ed4e607f63556d336\") " pod="kube-system/kube-controller-manager-newest-cni-438547"
	Oct 17 19:42:38 newest-cni-438547 kubelet[1328]: I1017 19:42:38.084045    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/ec3eb2e7deda8bf7d890d8257f3a266e-etcd-data\") pod \"etcd-newest-cni-438547\" (UID: \"ec3eb2e7deda8bf7d890d8257f3a266e\") " pod="kube-system/etcd-newest-cni-438547"
	Oct 17 19:42:38 newest-cni-438547 kubelet[1328]: I1017 19:42:38.084073    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7afbdf56682fa0fe88a4452d5300fc77-k8s-certs\") pod \"kube-apiserver-newest-cni-438547\" (UID: \"7afbdf56682fa0fe88a4452d5300fc77\") " pod="kube-system/kube-apiserver-newest-cni-438547"
	Oct 17 19:42:38 newest-cni-438547 kubelet[1328]: I1017 19:42:38.764630    1328 apiserver.go:52] "Watching apiserver"
	Oct 17 19:42:38 newest-cni-438547 kubelet[1328]: I1017 19:42:38.782801    1328 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 17 19:42:38 newest-cni-438547 kubelet[1328]: I1017 19:42:38.836579    1328 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-438547"
	Oct 17 19:42:38 newest-cni-438547 kubelet[1328]: I1017 19:42:38.837072    1328 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-438547"
	Oct 17 19:42:38 newest-cni-438547 kubelet[1328]: E1017 19:42:38.847403    1328 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-438547\" already exists" pod="kube-system/kube-apiserver-newest-cni-438547"
	Oct 17 19:42:38 newest-cni-438547 kubelet[1328]: E1017 19:42:38.848979    1328 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-438547\" already exists" pod="kube-system/etcd-newest-cni-438547"
	Oct 17 19:42:38 newest-cni-438547 kubelet[1328]: I1017 19:42:38.899927    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-438547" podStartSLOduration=2.899902176 podStartE2EDuration="2.899902176s" podCreationTimestamp="2025-10-17 19:42:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:42:38.899192113 +0000 UTC m=+1.224466131" watchObservedRunningTime="2025-10-17 19:42:38.899902176 +0000 UTC m=+1.225176254"
	Oct 17 19:42:38 newest-cni-438547 kubelet[1328]: I1017 19:42:38.933744    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-438547" podStartSLOduration=1.933714557 podStartE2EDuration="1.933714557s" podCreationTimestamp="2025-10-17 19:42:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:42:38.918566575 +0000 UTC m=+1.243840592" watchObservedRunningTime="2025-10-17 19:42:38.933714557 +0000 UTC m=+1.258988587"
	Oct 17 19:42:38 newest-cni-438547 kubelet[1328]: I1017 19:42:38.933898    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-438547" podStartSLOduration=1.933890029 podStartE2EDuration="1.933890029s" podCreationTimestamp="2025-10-17 19:42:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:42:38.93293899 +0000 UTC m=+1.258213007" watchObservedRunningTime="2025-10-17 19:42:38.933890029 +0000 UTC m=+1.259164045"
	Oct 17 19:42:38 newest-cni-438547 kubelet[1328]: I1017 19:42:38.962883    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-438547" podStartSLOduration=1.962855767 podStartE2EDuration="1.962855767s" podCreationTimestamp="2025-10-17 19:42:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:42:38.946895373 +0000 UTC m=+1.272169389" watchObservedRunningTime="2025-10-17 19:42:38.962855767 +0000 UTC m=+1.288129785"
	Oct 17 19:42:42 newest-cni-438547 kubelet[1328]: I1017 19:42:42.763153    1328 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 17 19:42:42 newest-cni-438547 kubelet[1328]: I1017 19:42:42.763998    1328 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 17 19:42:44 newest-cni-438547 kubelet[1328]: I1017 19:42:44.128328    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a38161c3-4097-4e85-b391-e3b730dd90b6-kube-proxy\") pod \"kube-proxy-zfk4z\" (UID: \"a38161c3-4097-4e85-b391-e3b730dd90b6\") " pod="kube-system/kube-proxy-zfk4z"
	Oct 17 19:42:44 newest-cni-438547 kubelet[1328]: I1017 19:42:44.130913    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a38161c3-4097-4e85-b391-e3b730dd90b6-xtables-lock\") pod \"kube-proxy-zfk4z\" (UID: \"a38161c3-4097-4e85-b391-e3b730dd90b6\") " pod="kube-system/kube-proxy-zfk4z"
	Oct 17 19:42:44 newest-cni-438547 kubelet[1328]: I1017 19:42:44.132131    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a38161c3-4097-4e85-b391-e3b730dd90b6-lib-modules\") pod \"kube-proxy-zfk4z\" (UID: \"a38161c3-4097-4e85-b391-e3b730dd90b6\") " pod="kube-system/kube-proxy-zfk4z"
	Oct 17 19:42:44 newest-cni-438547 kubelet[1328]: I1017 19:42:44.132196    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xvw7\" (UniqueName: \"kubernetes.io/projected/a38161c3-4097-4e85-b391-e3b730dd90b6-kube-api-access-6xvw7\") pod \"kube-proxy-zfk4z\" (UID: \"a38161c3-4097-4e85-b391-e3b730dd90b6\") " pod="kube-system/kube-proxy-zfk4z"
	Oct 17 19:42:44 newest-cni-438547 kubelet[1328]: I1017 19:42:44.132225    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/368f40c9-2ab9-4d9d-9310-950d3371f4c0-cni-cfg\") pod \"kindnet-nhg7f\" (UID: \"368f40c9-2ab9-4d9d-9310-950d3371f4c0\") " pod="kube-system/kindnet-nhg7f"
	Oct 17 19:42:44 newest-cni-438547 kubelet[1328]: I1017 19:42:44.132247    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/368f40c9-2ab9-4d9d-9310-950d3371f4c0-lib-modules\") pod \"kindnet-nhg7f\" (UID: \"368f40c9-2ab9-4d9d-9310-950d3371f4c0\") " pod="kube-system/kindnet-nhg7f"
	Oct 17 19:42:44 newest-cni-438547 kubelet[1328]: I1017 19:42:44.132279    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/368f40c9-2ab9-4d9d-9310-950d3371f4c0-xtables-lock\") pod \"kindnet-nhg7f\" (UID: \"368f40c9-2ab9-4d9d-9310-950d3371f4c0\") " pod="kube-system/kindnet-nhg7f"
	Oct 17 19:42:44 newest-cni-438547 kubelet[1328]: I1017 19:42:44.132302    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtzzc\" (UniqueName: \"kubernetes.io/projected/368f40c9-2ab9-4d9d-9310-950d3371f4c0-kube-api-access-wtzzc\") pod \"kindnet-nhg7f\" (UID: \"368f40c9-2ab9-4d9d-9310-950d3371f4c0\") " pod="kube-system/kindnet-nhg7f"
	Oct 17 19:42:44 newest-cni-438547 kubelet[1328]: I1017 19:42:44.867793    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-nhg7f" podStartSLOduration=1.867772572 podStartE2EDuration="1.867772572s" podCreationTimestamp="2025-10-17 19:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:42:44.86761927 +0000 UTC m=+7.192893288" watchObservedRunningTime="2025-10-17 19:42:44.867772572 +0000 UTC m=+7.193046589"
	Oct 17 19:42:46 newest-cni-438547 kubelet[1328]: I1017 19:42:46.771018    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zfk4z" podStartSLOduration=3.77099719 podStartE2EDuration="3.77099719s" podCreationTimestamp="2025-10-17 19:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 19:42:44.935933562 +0000 UTC m=+7.261207579" watchObservedRunningTime="2025-10-17 19:42:46.77099719 +0000 UTC m=+9.096271250"
	

-- /stdout --
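
The etcd section near the top of those logs is a run of "process raft request" traces, all over 100ms, clustered around 19:42:44 when the addon apply hit the apiserver. Each trace entry is a JSON object, so a saved copy of the logs can be filtered for just the slow transactions; a minimal sketch, assuming the logs were saved to logs.txt (e.g. with `minikube logs --file=logs.txt`) and that jq is installed:

	# Keep only traceutil entries, strip the leading tab, print timestamp and duration
	grep '"caller":"traceutil/trace.go' logs.txt \
	  | sed 's/^[^{]*//' \
	  | jq -r 'select(.msg | endswith("transaction")) | [.ts, .duration] | @tsv'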
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-438547 -n newest-cni-438547
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-438547 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-8pfhn storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-438547 describe pod coredns-66bc5c9577-8pfhn storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-438547 describe pod coredns-66bc5c9577-8pfhn storage-provisioner: exit status 1 (256.740266ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-8pfhn" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-438547 describe pod coredns-66bc5c9577-8pfhn storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.60s)
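
Note that the describe step in the post-mortem above ran `kubectl describe pod` without a namespace; coredns and storage-provisioner are kube-system pods in minikube, so the default namespace answers NotFound even while the pods may still exist. A minimal sketch of the same non-running-pod check that carries the namespace through (assuming the newest-cni-438547 context is still present):

	# Same field selector the harness uses, but emitting namespace/name pairs
	kubectl --context newest-cni-438547 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
	| while read -r ns name; do
	    kubectl --context newest-cni-438547 -n "$ns" describe pod "$name"
	  done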

TestStartStop/group/newest-cni/serial/Pause (6.14s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-438547 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-438547 --alsologtostderr -v=1: exit status 80 (2.341197169s)

-- stdout --
	* Pausing node newest-cni-438547 ... 
	
	

-- /stdout --
** stderr ** 
	I1017 19:43:18.971162  772709 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:43:18.971473  772709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:43:18.971485  772709 out.go:374] Setting ErrFile to fd 2...
	I1017 19:43:18.971490  772709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:43:18.971805  772709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:43:18.972074  772709 out.go:368] Setting JSON to false
	I1017 19:43:18.972123  772709 mustload.go:65] Loading cluster: newest-cni-438547
	I1017 19:43:18.972530  772709 config.go:182] Loaded profile config "newest-cni-438547": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:18.973087  772709 cli_runner.go:164] Run: docker container inspect newest-cni-438547 --format={{.State.Status}}
	I1017 19:43:18.994261  772709 host.go:66] Checking if "newest-cni-438547" exists ...
	I1017 19:43:18.994577  772709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:43:19.066062  772709 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 19:43:19.054468986 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:43:19.066746  772709 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-438547 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 19:43:19.068867  772709 out.go:179] * Pausing node newest-cni-438547 ... 
	I1017 19:43:19.070264  772709 host.go:66] Checking if "newest-cni-438547" exists ...
	I1017 19:43:19.070542  772709 ssh_runner.go:195] Run: systemctl --version
	I1017 19:43:19.070598  772709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:19.090300  772709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:43:19.188398  772709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:43:19.202392  772709 pause.go:52] kubelet running: true
	I1017 19:43:19.202450  772709 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:43:19.358651  772709 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:43:19.358779  772709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:43:19.432589  772709 cri.go:89] found id: "5cb60d7d09aab27fd00d9f02862df4772fb81077b845a75cc383b5fcfabe2bef"
	I1017 19:43:19.432610  772709 cri.go:89] found id: "396b79a83b6aad7be20450af8a558a28d65313c75e489cd893f4c91b119849ac"
	I1017 19:43:19.432614  772709 cri.go:89] found id: "783ad2b5346ea181a472270de81a22e9136094d7a4a6901197f9b3b4dd831dd6"
	I1017 19:43:19.432618  772709 cri.go:89] found id: "fba4a1410021bdf673cba310189091795eb97198d5419e4df6a5ea9b8ceea611"
	I1017 19:43:19.432620  772709 cri.go:89] found id: "2e544eb21d59ec702243e34c0c9957da878518767a5d668acdbf48ab0caa8515"
	I1017 19:43:19.432624  772709 cri.go:89] found id: "8140e5435bac0f77a7bf313d441166129425c73e3e1d7fabfc13834d3cfa44bd"
	I1017 19:43:19.432627  772709 cri.go:89] found id: ""
	I1017 19:43:19.432664  772709 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:43:19.445841  772709 retry.go:31] will retry after 215.989982ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:43:19Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:43:19.662388  772709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:43:19.676461  772709 pause.go:52] kubelet running: false
	I1017 19:43:19.676517  772709 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:43:19.790965  772709 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:43:19.791037  772709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:43:19.865425  772709 cri.go:89] found id: "5cb60d7d09aab27fd00d9f02862df4772fb81077b845a75cc383b5fcfabe2bef"
	I1017 19:43:19.865449  772709 cri.go:89] found id: "396b79a83b6aad7be20450af8a558a28d65313c75e489cd893f4c91b119849ac"
	I1017 19:43:19.865453  772709 cri.go:89] found id: "783ad2b5346ea181a472270de81a22e9136094d7a4a6901197f9b3b4dd831dd6"
	I1017 19:43:19.865456  772709 cri.go:89] found id: "fba4a1410021bdf673cba310189091795eb97198d5419e4df6a5ea9b8ceea611"
	I1017 19:43:19.865460  772709 cri.go:89] found id: "2e544eb21d59ec702243e34c0c9957da878518767a5d668acdbf48ab0caa8515"
	I1017 19:43:19.865464  772709 cri.go:89] found id: "8140e5435bac0f77a7bf313d441166129425c73e3e1d7fabfc13834d3cfa44bd"
	I1017 19:43:19.865469  772709 cri.go:89] found id: ""
	I1017 19:43:19.865532  772709 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:43:19.878906  772709 retry.go:31] will retry after 477.990878ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:43:19Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:43:20.357663  772709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:43:20.371876  772709 pause.go:52] kubelet running: false
	I1017 19:43:20.371942  772709 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:43:20.495589  772709 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:43:20.495660  772709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:43:20.569998  772709 cri.go:89] found id: "5cb60d7d09aab27fd00d9f02862df4772fb81077b845a75cc383b5fcfabe2bef"
	I1017 19:43:20.570017  772709 cri.go:89] found id: "396b79a83b6aad7be20450af8a558a28d65313c75e489cd893f4c91b119849ac"
	I1017 19:43:20.570020  772709 cri.go:89] found id: "783ad2b5346ea181a472270de81a22e9136094d7a4a6901197f9b3b4dd831dd6"
	I1017 19:43:20.570024  772709 cri.go:89] found id: "fba4a1410021bdf673cba310189091795eb97198d5419e4df6a5ea9b8ceea611"
	I1017 19:43:20.570027  772709 cri.go:89] found id: "2e544eb21d59ec702243e34c0c9957da878518767a5d668acdbf48ab0caa8515"
	I1017 19:43:20.570030  772709 cri.go:89] found id: "8140e5435bac0f77a7bf313d441166129425c73e3e1d7fabfc13834d3cfa44bd"
	I1017 19:43:20.570033  772709 cri.go:89] found id: ""
	I1017 19:43:20.570088  772709 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:43:20.584232  772709 retry.go:31] will retry after 437.18798ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:43:20Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:43:21.021844  772709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:43:21.037030  772709 pause.go:52] kubelet running: false
	I1017 19:43:21.037094  772709 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:43:21.152477  772709 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:43:21.152553  772709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:43:21.227632  772709 cri.go:89] found id: "5cb60d7d09aab27fd00d9f02862df4772fb81077b845a75cc383b5fcfabe2bef"
	I1017 19:43:21.227658  772709 cri.go:89] found id: "396b79a83b6aad7be20450af8a558a28d65313c75e489cd893f4c91b119849ac"
	I1017 19:43:21.227663  772709 cri.go:89] found id: "783ad2b5346ea181a472270de81a22e9136094d7a4a6901197f9b3b4dd831dd6"
	I1017 19:43:21.227667  772709 cri.go:89] found id: "fba4a1410021bdf673cba310189091795eb97198d5419e4df6a5ea9b8ceea611"
	I1017 19:43:21.227669  772709 cri.go:89] found id: "2e544eb21d59ec702243e34c0c9957da878518767a5d668acdbf48ab0caa8515"
	I1017 19:43:21.227672  772709 cri.go:89] found id: "8140e5435bac0f77a7bf313d441166129425c73e3e1d7fabfc13834d3cfa44bd"
	I1017 19:43:21.227674  772709 cri.go:89] found id: ""
	I1017 19:43:21.227738  772709 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:43:21.243826  772709 out.go:203] 
	W1017 19:43:21.245321  772709 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:43:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:43:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:43:21.245343  772709 out.go:285] * 
	* 
	W1017 19:43:21.251067  772709 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:43:21.252706  772709 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-438547 --alsologtostderr -v=1 failed: exit status 80
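
The shape of the failure is worth noting: every crictl listing in the trace succeeded and returned the same six container IDs, while every `sudo runc list -f json` attempt failed with `open /run/runc: no such file or directory`, so the pause gave up after its retries. The two probes can be repeated by hand inside the node (e.g. via `minikube ssh -p newest-cni-438547`) to separate the CRI view from the low-level runc state; the commands are verbatim from the trace:

	# Fails on this crio node: the state directory runc defaults to is absent
	sudo runc list -f json
	ls /run/runc

	# Succeeds: the CRI-level listing minikube used to enumerate the containers
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system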
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-438547
helpers_test.go:243: (dbg) docker inspect newest-cni-438547:

-- stdout --
	[
	    {
	        "Id": "54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a",
	        "Created": "2025-10-17T19:42:20.078531564Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 769397,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:43:06.714154888Z",
	            "FinishedAt": "2025-10-17T19:43:05.504987922Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a/hostname",
	        "HostsPath": "/var/lib/docker/containers/54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a/hosts",
	        "LogPath": "/var/lib/docker/containers/54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a/54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a-json.log",
	        "Name": "/newest-cni-438547",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-438547:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-438547",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a",
	                "LowerDir": "/var/lib/docker/overlay2/b72f80e5d1663080bbedf26b788b7f64c463dad3e253926c9453e0666b33a8a4-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b72f80e5d1663080bbedf26b788b7f64c463dad3e253926c9453e0666b33a8a4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b72f80e5d1663080bbedf26b788b7f64c463dad3e253926c9453e0666b33a8a4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b72f80e5d1663080bbedf26b788b7f64c463dad3e253926c9453e0666b33a8a4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-438547",
	                "Source": "/var/lib/docker/volumes/newest-cni-438547/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-438547",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-438547",
	                "name.minikube.sigs.k8s.io": "newest-cni-438547",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0dd8aa4c2675f15f3781d0eecbf6f70953ab9d3b361340373f28f3590cc132a9",
	            "SandboxKey": "/var/run/docker/netns/0dd8aa4c2675",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-438547": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:bb:8e:98:d6:41",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "77fe0b660d34aea0508d43e4b8b59b631dd8d785f42a3fec7199378905db0191",
	                    "EndpointID": "37fb774b74d52c6730b0bf79698c19a63992086e370b75c3a3dedb7e9fd56598",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-438547",
	                        "54bdc696aaf8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
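The inspect JSON above is where the forwarded host ports come from (22/tcp on 127.0.0.1:33478, 8443/tcp on 127.0.0.1:33481, and so on); minikube itself reads them back with a Go template, as the later `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` call in this log shows. A minimal standalone sketch of the same lookup, assuming only that the docker CLI is on PATH and the container exists:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// Just the fields needed from `docker container inspect` output.
	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "newest-cni-438547").Output()
		if err != nil {
			log.Fatal(err)
		}
		var cs []container // inspect always returns a JSON array
		if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
			log.Fatalf("decode inspect output: %v", err)
		}
		// Mirrors the "Ports" object above: "22/tcp" -> [{127.0.0.1 33478}].
		for _, b := range cs[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh forwarded to %s:%s\n", b.HostIp, b.HostPort)
		}
	}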
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438547 -n newest-cni-438547
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438547 -n newest-cni-438547: exit status 2 (342.521334ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-438547 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-112878 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:42 UTC │
	│ image   │ no-preload-171807 image list --format=json                                                                                                                                                                                                    │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ pause   │ -p no-preload-171807 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ delete  │ -p no-preload-171807                                                                                                                                                                                                                          │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ delete  │ -p no-preload-171807                                                                                                                                                                                                                          │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-438547 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ start   │ -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-137244    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-137244    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ image   │ embed-certs-599709 image list --format=json                                                                                                                                                                                                   │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ pause   │ -p embed-certs-599709 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-112878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-112878 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ delete  │ -p kubernetes-upgrade-137244                                                                                                                                                                                                                  │ kubernetes-upgrade-137244    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ delete  │ -p embed-certs-599709                                                                                                                                                                                                                         │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ start   │ -p auto-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ delete  │ -p embed-certs-599709                                                                                                                                                                                                                         │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ start   │ -p enable-default-cni-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio                                                                               │ enable-default-cni-448344    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-438547 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ stop    │ -p newest-cni-438547 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-112878 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ start   │ -p default-k8s-diff-port-112878 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-438547 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ start   │ -p newest-cni-438547 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ image   │ newest-cni-438547 image list --format=json                                                                                                                                                                                                    │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ pause   │ -p newest-cni-438547 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:43:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:43:06.319972  769029 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:43:06.322927  769029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:43:06.322950  769029 out.go:374] Setting ErrFile to fd 2...
	I1017 19:43:06.322959  769029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:43:06.323496  769029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:43:06.324497  769029 out.go:368] Setting JSON to false
	I1017 19:43:06.326615  769029 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12325,"bootTime":1760717861,"procs":338,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:43:06.326843  769029 start.go:141] virtualization: kvm guest
	I1017 19:43:06.329640  769029 out.go:179] * [newest-cni-438547] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:43:06.332298  769029 notify.go:220] Checking for updates...
	I1017 19:43:06.332377  769029 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:43:06.333918  769029 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:43:06.335499  769029 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:43:06.337621  769029 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:43:06.338878  769029 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:43:06.341038  769029 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:43:05.599158  766638 node_ready.go:49] node "default-k8s-diff-port-112878" is "Ready"
	I1017 19:43:05.599201  766638 node_ready.go:38] duration metric: took 1.614895769s for node "default-k8s-diff-port-112878" to be "Ready" ...
	I1017 19:43:05.599222  766638 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:43:05.599276  766638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:43:06.409938  766638 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.422807633s)
	I1017 19:43:06.410181  766638 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.392451555s)
	I1017 19:43:06.410315  766638 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.223990594s)
	I1017 19:43:06.410346  766638 api_server.go:72] duration metric: took 2.641009673s to wait for apiserver process to appear ...
	I1017 19:43:06.410362  766638 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:43:06.410382  766638 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1017 19:43:06.411583  766638 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-112878 addons enable metrics-server
	
	I1017 19:43:06.421116  766638 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 19:43:06.421169  766638 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
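	Both healthz dumps above are the same 500 response: the apiserver answers while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks are still pending, and the wait loop simply retries until it sees 200 (which happens a few hundred milliseconds later in this log). A minimal sketch of such a poll, assuming direct network reachability; note the real checker authenticates against the cluster CA rather than skipping TLS verification:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		// Skipping TLS verification only keeps the sketch self-contained.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.85.2:8444/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz ok")
					return
				}
				// A 500 body lists each pending poststarthook, as in the log above.
				fmt.Printf("healthz %d:\n%s", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("apiserver never became healthy")
	}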
	I1017 19:43:06.430215  766638 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1017 19:43:06.343391  769029 config.go:182] Loaded profile config "newest-cni-438547": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:06.344195  769029 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:43:06.388187  769029 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:43:06.388395  769029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:43:06.465305  769029 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-17 19:43:06.454372433 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:43:06.465427  769029 docker.go:318] overlay module found
	I1017 19:43:06.467425  769029 out.go:179] * Using the docker driver based on existing profile
	I1017 19:43:06.468742  769029 start.go:305] selected driver: docker
	I1017 19:43:06.468762  769029 start.go:925] validating driver "docker" against &{Name:newest-cni-438547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:43:06.468874  769029 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:43:06.469639  769029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:43:06.535884  769029 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-17 19:43:06.523397698 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:43:06.536253  769029 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 19:43:06.536287  769029 cni.go:84] Creating CNI manager for ""
	I1017 19:43:06.536352  769029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:43:06.536424  769029 start.go:349] cluster config:
	{Name:newest-cni-438547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:43:06.539967  769029 out.go:179] * Starting "newest-cni-438547" primary control-plane node in "newest-cni-438547" cluster
	I1017 19:43:06.541332  769029 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:43:06.543215  769029 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:43:06.544450  769029 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:43:06.544507  769029 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:43:06.544519  769029 cache.go:58] Caching tarball of preloaded images
	I1017 19:43:06.544568  769029 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:43:06.544635  769029 preload.go:233] Found /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:43:06.544652  769029 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:43:06.544804  769029 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/config.json ...
	I1017 19:43:06.569782  769029 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:43:06.569806  769029 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:43:06.569828  769029 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:43:06.569861  769029 start.go:360] acquireMachinesLock for newest-cni-438547: {Name:mkf0920afa8583ecdc28963ff3ff9f81a225f71e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:43:06.569949  769029 start.go:364] duration metric: took 50.025µs to acquireMachinesLock for "newest-cni-438547"
	I1017 19:43:06.569980  769029 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:43:06.569988  769029 fix.go:54] fixHost starting: 
	I1017 19:43:06.570298  769029 cli_runner.go:164] Run: docker container inspect newest-cni-438547 --format={{.State.Status}}
	I1017 19:43:06.592361  769029 fix.go:112] recreateIfNeeded on newest-cni-438547: state=Stopped err=<nil>
	W1017 19:43:06.592405  769029 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:43:05.372229  760682 out.go:252]   - Configuring RBAC rules ...
	I1017 19:43:05.372414  760682 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 19:43:05.376948  760682 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 19:43:05.382790  760682 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 19:43:05.386960  760682 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 19:43:05.390167  760682 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 19:43:05.396957  760682 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 19:43:05.721175  760682 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 19:43:06.187601  760682 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 19:43:06.721932  760682 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 19:43:06.722778  760682 kubeadm.go:318] 
	I1017 19:43:06.722963  760682 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 19:43:06.722996  760682 kubeadm.go:318] 
	I1017 19:43:06.723171  760682 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 19:43:06.723205  760682 kubeadm.go:318] 
	I1017 19:43:06.723260  760682 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 19:43:06.723340  760682 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 19:43:06.723434  760682 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 19:43:06.723485  760682 kubeadm.go:318] 
	I1017 19:43:06.723560  760682 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 19:43:06.723790  760682 kubeadm.go:318] 
	I1017 19:43:06.723858  760682 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 19:43:06.723864  760682 kubeadm.go:318] 
	I1017 19:43:06.723922  760682 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 19:43:06.724006  760682 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 19:43:06.724081  760682 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 19:43:06.724086  760682 kubeadm.go:318] 
	I1017 19:43:06.724178  760682 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 19:43:06.724263  760682 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 19:43:06.724268  760682 kubeadm.go:318] 
	I1017 19:43:06.724432  760682 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 8swro8.qh9sbwijb0bw9qm9 \
	I1017 19:43:06.724557  760682 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e \
	I1017 19:43:06.724583  760682 kubeadm.go:318] 	--control-plane 
	I1017 19:43:06.724588  760682 kubeadm.go:318] 
	I1017 19:43:06.724690  760682 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 19:43:06.724700  760682 kubeadm.go:318] 
	I1017 19:43:06.724792  760682 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 8swro8.qh9sbwijb0bw9qm9 \
	I1017 19:43:06.724916  760682 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e 
	I1017 19:43:06.729000  760682 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 19:43:06.729180  760682 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
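	The kubeadm output above leaves admin credentials in /etc/kubernetes/admin.conf; anything that talks to the new cluster programmatically can load that kubeconfig the same way. A hypothetical client-go sketch (assuming the k8s.io/client-go and k8s.io/apimachinery modules are available as dependencies), run as root on the node:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Same credentials the kubeadm output points at via KUBECONFIG.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			fmt.Println("node:", n.Name)
		}
	}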
	I1017 19:43:06.729266  760682 cni.go:84] Creating CNI manager for ""
	I1017 19:43:06.729280  760682 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:43:06.732501  760682 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 19:43:06.733852  760682 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 19:43:06.741033  760682 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 19:43:06.741061  760682 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 19:43:06.760640  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 19:43:05.956523  761258 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1017 19:43:05.973781  761258 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1017 19:43:06.003226  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-448344 minikube.k8s.io/updated_at=2025_10_17T19_43_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=enable-default-cni-448344 minikube.k8s.io/primary=true
	I1017 19:43:06.003649  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:06.003796  761258 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 19:43:06.182165  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:06.182271  761258 ops.go:34] apiserver oom_adj: -16
	I1017 19:43:06.682849  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:07.182571  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:07.682305  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
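	The repeated `kubectl get sa default` runs above (and the matching ones for auto-448344 further down) are the elevateKubeSystemPrivileges wait: poll every 500ms until the controller manager has created the default service account. A minimal standalone sketch of the same wait, assuming kubectl is on PATH and KUBECONFIG points at the cluster:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Succeeds only once the controller manager has created the
			// "default" ServiceAccount in the default namespace.
			if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("timed out waiting for the default service account")
	}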
	I1017 19:43:06.431624  766638 addons.go:514] duration metric: took 2.662217197s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1017 19:43:06.910912  766638 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1017 19:43:06.916636  766638 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1017 19:43:06.917918  766638 api_server.go:141] control plane version: v1.34.1
	I1017 19:43:06.917956  766638 api_server.go:131] duration metric: took 507.587162ms to wait for apiserver health ...
	I1017 19:43:06.917965  766638 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:43:06.922920  766638 system_pods.go:59] 8 kube-system pods found
	I1017 19:43:06.922977  766638 system_pods.go:61] "coredns-66bc5c9577-vckxk" [40aad458-e537-456b-8932-594d8406d02d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:43:06.922993  766638 system_pods.go:61] "etcd-default-k8s-diff-port-112878" [0ef85596-9da6-4a2d-9d8c-1007c10aa5c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:43:06.923004  766638 system_pods.go:61] "kindnet-xvc9b" [9d53d141-fce2-4ae1-a29b-4cd44dd4fdea] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:43:06.923019  766638 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-112878" [3611109b-a6cb-4b9e-8ef4-8cd67a6b6d5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:43:06.923028  766638 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-112878" [70168103-f8fc-46fc-8378-869752d9d9f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:43:06.923040  766638 system_pods.go:61] "kube-proxy-d2jpw" [72c3c32f-e74f-46d2-a943-ca279ef893c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:43:06.923095  766638 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-112878" [bec0ce92-ff18-4f92-9085-c601198dacc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:43:06.923109  766638 system_pods.go:61] "storage-provisioner" [7ffb3a0e-4e95-4f0b-940d-c96fec7aa2cc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:43:06.923118  766638 system_pods.go:74] duration metric: took 5.145823ms to wait for pod list to return data ...
	I1017 19:43:06.923136  766638 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:43:06.925918  766638 default_sa.go:45] found service account: "default"
	I1017 19:43:06.925944  766638 default_sa.go:55] duration metric: took 2.800342ms for default service account to be created ...
	I1017 19:43:06.925959  766638 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:43:06.929147  766638 system_pods.go:86] 8 kube-system pods found
	I1017 19:43:06.929207  766638 system_pods.go:89] "coredns-66bc5c9577-vckxk" [40aad458-e537-456b-8932-594d8406d02d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:43:06.929266  766638 system_pods.go:89] "etcd-default-k8s-diff-port-112878" [0ef85596-9da6-4a2d-9d8c-1007c10aa5c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:43:06.929291  766638 system_pods.go:89] "kindnet-xvc9b" [9d53d141-fce2-4ae1-a29b-4cd44dd4fdea] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:43:06.929299  766638 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-112878" [3611109b-a6cb-4b9e-8ef4-8cd67a6b6d5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:43:06.929309  766638 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-112878" [70168103-f8fc-46fc-8378-869752d9d9f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:43:06.929316  766638 system_pods.go:89] "kube-proxy-d2jpw" [72c3c32f-e74f-46d2-a943-ca279ef893c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:43:06.929333  766638 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-112878" [bec0ce92-ff18-4f92-9085-c601198dacc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:43:06.929341  766638 system_pods.go:89] "storage-provisioner" [7ffb3a0e-4e95-4f0b-940d-c96fec7aa2cc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:43:06.929351  766638 system_pods.go:126] duration metric: took 3.385286ms to wait for k8s-apps to be running ...
	I1017 19:43:06.929373  766638 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:43:06.929433  766638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:43:06.944989  766638 system_svc.go:56] duration metric: took 15.602005ms WaitForService to wait for kubelet
	I1017 19:43:06.945024  766638 kubeadm.go:586] duration metric: took 3.175691952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:43:06.945048  766638 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:43:06.948234  766638 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 19:43:06.948272  766638 node_conditions.go:123] node cpu capacity is 8
	I1017 19:43:06.948291  766638 node_conditions.go:105] duration metric: took 3.236546ms to run NodePressure ...
	I1017 19:43:06.948308  766638 start.go:241] waiting for startup goroutines ...
	I1017 19:43:06.948320  766638 start.go:246] waiting for cluster config update ...
	I1017 19:43:06.948338  766638 start.go:255] writing updated cluster config ...
	I1017 19:43:06.948713  766638 ssh_runner.go:195] Run: rm -f paused
	I1017 19:43:06.955232  766638 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:43:06.959214  766638 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vckxk" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 19:43:08.964913  766638 pod_ready.go:104] pod "coredns-66bc5c9577-vckxk" is not "Ready", error: <nil>
	I1017 19:43:08.182714  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:08.682813  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:09.182970  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:09.682324  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:10.183036  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:10.682889  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:10.780997  761258 kubeadm.go:1113] duration metric: took 4.777398825s to wait for elevateKubeSystemPrivileges
	I1017 19:43:10.781043  761258 kubeadm.go:402] duration metric: took 16.212143267s to StartCluster
	I1017 19:43:10.781082  761258 settings.go:142] acquiring lock: {Name:mkb8ebc6edbbb6915dd74086f502bcc2721555a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:10.781175  761258 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:43:10.782495  761258 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:10.782794  761258 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:43:10.782815  761258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 19:43:10.782906  761258 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:43:10.782997  761258 config.go:182] Loaded profile config "enable-default-cni-448344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:10.783006  761258 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-448344"
	I1017 19:43:10.783008  761258 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-448344"
	I1017 19:43:10.783025  761258 addons.go:238] Setting addon storage-provisioner=true in "enable-default-cni-448344"
	I1017 19:43:10.783035  761258 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-448344"
	I1017 19:43:10.783063  761258 host.go:66] Checking if "enable-default-cni-448344" exists ...
	I1017 19:43:10.783424  761258 cli_runner.go:164] Run: docker container inspect enable-default-cni-448344 --format={{.State.Status}}
	I1017 19:43:10.783605  761258 cli_runner.go:164] Run: docker container inspect enable-default-cni-448344 --format={{.State.Status}}
	I1017 19:43:10.784791  761258 out.go:179] * Verifying Kubernetes components...
	I1017 19:43:10.786202  761258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:43:10.810001  761258 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 19:43:07.050227  760682 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 19:43:07.050291  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:07.050325  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-448344 minikube.k8s.io/updated_at=2025_10_17T19_43_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=auto-448344 minikube.k8s.io/primary=true
	I1017 19:43:07.134622  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:07.148253  760682 ops.go:34] apiserver oom_adj: -16
	I1017 19:43:07.636724  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:08.134822  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:08.635079  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:09.135302  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:09.634674  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:10.135730  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:10.634924  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:11.135610  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:11.276013  760682 kubeadm.go:1113] duration metric: took 4.225780364s to wait for elevateKubeSystemPrivileges
	I1017 19:43:11.276050  760682 kubeadm.go:402] duration metric: took 16.677676822s to StartCluster
	I1017 19:43:11.276074  760682 settings.go:142] acquiring lock: {Name:mkb8ebc6edbbb6915dd74086f502bcc2721555a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:11.276139  760682 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:43:11.277979  760682 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:11.278260  760682 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:43:11.278979  760682 config.go:182] Loaded profile config "auto-448344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:11.279031  760682 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:43:11.279110  760682 addons.go:69] Setting storage-provisioner=true in profile "auto-448344"
	I1017 19:43:11.279129  760682 addons.go:238] Setting addon storage-provisioner=true in "auto-448344"
	I1017 19:43:11.279160  760682 host.go:66] Checking if "auto-448344" exists ...
	I1017 19:43:11.279742  760682 cli_runner.go:164] Run: docker container inspect auto-448344 --format={{.State.Status}}
	I1017 19:43:11.279926  760682 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 19:43:11.280373  760682 addons.go:69] Setting default-storageclass=true in profile "auto-448344"
	I1017 19:43:11.280397  760682 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-448344"
	I1017 19:43:11.280756  760682 cli_runner.go:164] Run: docker container inspect auto-448344 --format={{.State.Status}}
	I1017 19:43:11.282501  760682 out.go:179] * Verifying Kubernetes components...
	I1017 19:43:06.594508  769029 out.go:252] * Restarting existing docker container for "newest-cni-438547" ...
	I1017 19:43:06.594591  769029 cli_runner.go:164] Run: docker start newest-cni-438547
	I1017 19:43:07.007217  769029 cli_runner.go:164] Run: docker container inspect newest-cni-438547 --format={{.State.Status}}
	I1017 19:43:07.031616  769029 kic.go:430] container "newest-cni-438547" state is running.
	I1017 19:43:07.032141  769029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438547
	I1017 19:43:07.053527  769029 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/config.json ...
	I1017 19:43:07.053907  769029 machine.go:93] provisionDockerMachine start ...
	I1017 19:43:07.054003  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:07.077736  769029 main.go:141] libmachine: Using SSH client type: native
	I1017 19:43:07.078071  769029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1017 19:43:07.078091  769029 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:43:07.078915  769029 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35926->127.0.0.1:33478: read: connection reset by peer
	I1017 19:43:10.244014  769029 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-438547
	
	I1017 19:43:10.244048  769029 ubuntu.go:182] provisioning hostname "newest-cni-438547"
	I1017 19:43:10.244117  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:10.272575  769029 main.go:141] libmachine: Using SSH client type: native
	I1017 19:43:10.273026  769029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1017 19:43:10.273046  769029 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-438547 && echo "newest-cni-438547" | sudo tee /etc/hostname
	I1017 19:43:10.464806  769029 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-438547
	
	I1017 19:43:10.464892  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:10.490588  769029 main.go:141] libmachine: Using SSH client type: native
	I1017 19:43:10.491359  769029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1017 19:43:10.491396  769029 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-438547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-438547/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-438547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:43:10.649799  769029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
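The hostname script above follows the Debian 127.0.1.1 convention: if /etc/hosts already maps the new hostname it is left alone; otherwise an existing 127.0.1.1 entry is rewritten, or a new one is appended. The same logic with comments:

	if ! grep -xq '.*\snewest-cni-438547' /etc/hosts; then        # hostname not mapped yet
	    if grep -xq '127.0.1.1\s.*' /etc/hosts; then              # reuse the existing 127.0.1.1 line
	        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-438547/g' /etc/hosts
	    else                                                      # no such line: append one
	        echo '127.0.1.1 newest-cni-438547' | sudo tee -a /etc/hosts
	    fi
	fi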
	I1017 19:43:10.649835  769029 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-492109/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-492109/.minikube}
	I1017 19:43:10.649871  769029 ubuntu.go:190] setting up certificates
	I1017 19:43:10.649885  769029 provision.go:84] configureAuth start
	I1017 19:43:10.649950  769029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438547
	I1017 19:43:10.677218  769029 provision.go:143] copyHostCerts
	I1017 19:43:10.677285  769029 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem, removing ...
	I1017 19:43:10.677310  769029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem
	I1017 19:43:10.677396  769029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem (1078 bytes)
	I1017 19:43:10.677535  769029 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem, removing ...
	I1017 19:43:10.677545  769029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem
	I1017 19:43:10.677589  769029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem (1123 bytes)
	I1017 19:43:10.677679  769029 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem, removing ...
	I1017 19:43:10.677701  769029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem
	I1017 19:43:10.677745  769029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem (1679 bytes)
	I1017 19:43:10.677888  769029 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem org=jenkins.newest-cni-438547 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-438547]
	I1017 19:43:10.852497  769029 provision.go:177] copyRemoteCerts
	I1017 19:43:10.852627  769029 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:43:10.852692  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:10.884273  769029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:43:11.006525  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 19:43:11.042956  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 19:43:11.076327  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:43:11.108001  769029 provision.go:87] duration metric: took 458.094838ms to configureAuth
	I1017 19:43:11.108040  769029 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:43:11.108300  769029 config.go:182] Loaded profile config "newest-cni-438547": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:11.108449  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:11.135940  769029 main.go:141] libmachine: Using SSH client type: native
	I1017 19:43:11.136247  769029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1017 19:43:11.136267  769029 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:43:11.284740  760682 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:43:11.322018  760682 addons.go:238] Setting addon default-storageclass=true in "auto-448344"
	I1017 19:43:11.322105  760682 host.go:66] Checking if "auto-448344" exists ...
	I1017 19:43:11.322599  760682 cli_runner.go:164] Run: docker container inspect auto-448344 --format={{.State.Status}}
	I1017 19:43:11.324535  760682 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 19:43:10.810740  761258 addons.go:238] Setting addon default-storageclass=true in "enable-default-cni-448344"
	I1017 19:43:10.810793  761258 host.go:66] Checking if "enable-default-cni-448344" exists ...
	I1017 19:43:10.811313  761258 cli_runner.go:164] Run: docker container inspect enable-default-cni-448344 --format={{.State.Status}}
	I1017 19:43:10.811674  761258 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:43:10.811813  761258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 19:43:10.811913  761258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-448344
	I1017 19:43:10.855355  761258 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 19:43:10.855380  761258 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 19:43:10.855445  761258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-448344
	I1017 19:43:10.855760  761258 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/enable-default-cni-448344/id_rsa Username:docker}
	I1017 19:43:10.886087  761258 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/enable-default-cni-448344/id_rsa Username:docker}
	I1017 19:43:10.929245  761258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 19:43:10.994421  761258 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:43:11.007316  761258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:43:11.043725  761258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 19:43:11.274930  761258 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
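The sed pipeline at 19:43:10.929245 patches the CoreDNS Corefile inside its ConfigMap: it inserts a hosts block resolving host.minikube.internal to the host gateway just above the `forward . /etc/resolv.conf` line, adds a `log` directive above `errors`, and feeds the result back through `kubectl replace`. The edited Corefile fragment should read roughly as follows (reconstructed from the sed expressions; surrounding directives elided):

	log
	errors
	...
	hosts {
	   192.168.94.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf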
	I1017 19:43:11.276268  761258 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-448344" to be "Ready" ...
	I1017 19:43:11.309388  761258 node_ready.go:49] node "enable-default-cni-448344" is "Ready"
	I1017 19:43:11.309433  761258 node_ready.go:38] duration metric: took 33.139971ms for node "enable-default-cni-448344" to be "Ready" ...
	I1017 19:43:11.309471  761258 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:43:11.309674  761258 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:43:11.694162  761258 api_server.go:72] duration metric: took 911.320616ms to wait for apiserver process to appear ...
	I1017 19:43:11.694193  761258 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:43:11.694215  761258 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1017 19:43:11.710573  761258 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
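The healthz probe is a plain HTTPS GET against the apiserver; readiness means HTTP 200 with the literal body "ok". Roughly the same check by hand (a sketch: `-k` skips server-cert verification, and on clusters where anonymous access to /healthz is disabled the request would also need client credentials):

	curl -k https://192.168.94.2:8443/healthz
	# a healthy control plane prints:
	# ok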
	I1017 19:43:11.712375  761258 api_server.go:141] control plane version: v1.34.1
	I1017 19:43:11.712413  761258 api_server.go:131] duration metric: took 18.211885ms to wait for apiserver health ...
	I1017 19:43:11.712424  761258 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:43:11.713505  761258 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1017 19:43:11.326008  760682 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:43:11.326029  760682 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 19:43:11.326104  760682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-448344
	I1017 19:43:11.366713  760682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/auto-448344/id_rsa Username:docker}
	I1017 19:43:11.371949  760682 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 19:43:11.372028  760682 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 19:43:11.372154  760682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-448344
	I1017 19:43:11.405329  760682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/auto-448344/id_rsa Username:docker}
	I1017 19:43:11.529442  760682 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:43:11.529658  760682 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 19:43:11.557258  760682 node_ready.go:35] waiting up to 15m0s for node "auto-448344" to be "Ready" ...
	I1017 19:43:11.588966  760682 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 19:43:11.593476  760682 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:43:11.866988  760682 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1017 19:43:12.112780  760682 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1017 19:43:11.715142  761258 addons.go:514] duration metric: took 932.242897ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1017 19:43:11.722883  761258 system_pods.go:59] 8 kube-system pods found
	I1017 19:43:11.722947  761258 system_pods.go:61] "coredns-66bc5c9577-frpnt" [7983e2df-2598-460b-8f12-006554076f00] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:43:11.722961  761258 system_pods.go:61] "coredns-66bc5c9577-r5brm" [d4d75925-5c02-4ebc-8def-89369fde949b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:43:11.722980  761258 system_pods.go:61] "etcd-enable-default-cni-448344" [4bcbcf5f-6477-4042-96aa-264c9f5cdb46] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:43:11.722999  761258 system_pods.go:61] "kube-apiserver-enable-default-cni-448344" [92060ff1-2299-4284-bf52-3550b852c490] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:43:11.723028  761258 system_pods.go:61] "kube-controller-manager-enable-default-cni-448344" [3fc1f75c-e665-4582-b13a-99cf265b4a6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:43:11.723044  761258 system_pods.go:61] "kube-proxy-djghb" [c5d7ed52-990c-41a1-91d6-8d934775891b] Running
	I1017 19:43:11.723052  761258 system_pods.go:61] "kube-scheduler-enable-default-cni-448344" [af441ecd-8c3f-478b-8d76-74e270caa7f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:43:11.723059  761258 system_pods.go:61] "storage-provisioner" [2024497c-706b-4a1e-8ea6-ca5118bac96a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:43:11.723083  761258 system_pods.go:74] duration metric: took 10.650615ms to wait for pod list to return data ...
	I1017 19:43:11.723102  761258 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:43:11.728867  761258 default_sa.go:45] found service account: "default"
	I1017 19:43:11.728918  761258 default_sa.go:55] duration metric: took 5.806956ms for default service account to be created ...
	I1017 19:43:11.728933  761258 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:43:11.735569  761258 system_pods.go:86] 8 kube-system pods found
	I1017 19:43:11.735616  761258 system_pods.go:89] "coredns-66bc5c9577-frpnt" [7983e2df-2598-460b-8f12-006554076f00] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:43:11.735629  761258 system_pods.go:89] "coredns-66bc5c9577-r5brm" [d4d75925-5c02-4ebc-8def-89369fde949b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:43:11.735639  761258 system_pods.go:89] "etcd-enable-default-cni-448344" [4bcbcf5f-6477-4042-96aa-264c9f5cdb46] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:43:11.735649  761258 system_pods.go:89] "kube-apiserver-enable-default-cni-448344" [92060ff1-2299-4284-bf52-3550b852c490] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:43:11.735661  761258 system_pods.go:89] "kube-controller-manager-enable-default-cni-448344" [3fc1f75c-e665-4582-b13a-99cf265b4a6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:43:11.735668  761258 system_pods.go:89] "kube-proxy-djghb" [c5d7ed52-990c-41a1-91d6-8d934775891b] Running
	I1017 19:43:11.735676  761258 system_pods.go:89] "kube-scheduler-enable-default-cni-448344" [af441ecd-8c3f-478b-8d76-74e270caa7f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:43:11.735693  761258 system_pods.go:89] "storage-provisioner" [2024497c-706b-4a1e-8ea6-ca5118bac96a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:43:11.735705  761258 system_pods.go:126] duration metric: took 6.763012ms to wait for k8s-apps to be running ...
	I1017 19:43:11.735715  761258 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:43:11.735772  761258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:43:11.757352  761258 system_svc.go:56] duration metric: took 21.628393ms WaitForService to wait for kubelet
	I1017 19:43:11.757386  761258 kubeadm.go:586] duration metric: took 974.552141ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:43:11.757408  761258 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:43:11.763786  761258 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 19:43:11.763817  761258 node_conditions.go:123] node cpu capacity is 8
	I1017 19:43:11.763834  761258 node_conditions.go:105] duration metric: took 6.419185ms to run NodePressure ...
	I1017 19:43:11.763912  761258 start.go:241] waiting for startup goroutines ...
	I1017 19:43:11.780086  761258 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-448344" context rescaled to 1 replicas
	I1017 19:43:11.780205  761258 start.go:246] waiting for cluster config update ...
	I1017 19:43:11.780227  761258 start.go:255] writing updated cluster config ...
	I1017 19:43:11.780586  761258 ssh_runner.go:195] Run: rm -f paused
	I1017 19:43:11.788295  761258 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:43:11.793776  761258 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-frpnt" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:43:11.588902  769029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:43:11.588929  769029 machine.go:96] duration metric: took 4.53500188s to provisionDockerMachine
	I1017 19:43:11.588943  769029 start.go:293] postStartSetup for "newest-cni-438547" (driver="docker")
	I1017 19:43:11.588972  769029 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:43:11.589045  769029 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:43:11.589091  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:11.618485  769029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:43:11.748813  769029 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:43:11.755793  769029 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:43:11.755828  769029 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:43:11.755842  769029 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/addons for local assets ...
	I1017 19:43:11.755912  769029 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/files for local assets ...
	I1017 19:43:11.756008  769029 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem -> 4957252.pem in /etc/ssl/certs
	I1017 19:43:11.756148  769029 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:43:11.773489  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:43:11.805867  769029 start.go:296] duration metric: took 216.903683ms for postStartSetup
	I1017 19:43:11.805973  769029 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:43:11.806027  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:11.833201  769029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:43:11.942583  769029 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:43:11.952562  769029 fix.go:56] duration metric: took 5.382564977s for fixHost
	I1017 19:43:11.952597  769029 start.go:83] releasing machines lock for "newest-cni-438547", held for 5.382631851s
	I1017 19:43:11.952672  769029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438547
	I1017 19:43:11.976386  769029 ssh_runner.go:195] Run: cat /version.json
	I1017 19:43:11.976491  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:11.976509  769029 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:43:11.976664  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:12.005417  769029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:43:12.007192  769029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:43:12.120622  769029 ssh_runner.go:195] Run: systemctl --version
	I1017 19:43:12.218267  769029 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:43:12.270948  769029 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:43:12.277770  769029 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:43:12.277845  769029 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:43:12.290627  769029 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
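The `find` at 19:43:12.277845 is logged with its shell quoting stripped; re-quoted, it renames every bridge/podman CNI config that is not already disabled, appending a .mk_disabled suffix so the CNI minikube manages (kindnet here) is the only one CRI-O will load:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;

In this run nothing matched, hence the "no active bridge cni configs found" message above.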
	I1017 19:43:12.290657  769029 start.go:495] detecting cgroup driver to use...
	I1017 19:43:12.290715  769029 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 19:43:12.290778  769029 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:43:12.313299  769029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:43:12.333776  769029 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:43:12.333836  769029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:43:12.354902  769029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:43:12.373087  769029 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:43:12.488958  769029 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:43:12.597251  769029 docker.go:234] disabling docker service ...
	I1017 19:43:12.597309  769029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:43:12.614620  769029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:43:12.630483  769029 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:43:12.733453  769029 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:43:12.829488  769029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:43:12.844553  769029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:43:12.862001  769029 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:43:12.862081  769029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:43:12.872938  769029 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 19:43:12.873024  769029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:43:12.884478  769029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:43:12.896521  769029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:43:12.907132  769029 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:43:12.917745  769029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:43:12.929051  769029 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:43:12.941179  769029 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
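Taken together, the sed edits between 19:43:12.862 and 19:43:12.941 leave /etc/crio/crio.conf.d/02-crio.conf with the following relevant keys (a sketch assembled from the commands above; the TOML section headers are assumed from CRI-O's documented layout and are not shown in the log):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The `systemctl daemon-reload` and `systemctl restart crio` that follow put the new configuration into effect.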
	I1017 19:43:12.952369  769029 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:43:12.961822  769029 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:43:12.971004  769029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:43:13.068142  769029 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:43:13.545054  769029 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:43:13.545193  769029 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:43:13.550192  769029 start.go:563] Will wait 60s for crictl version
	I1017 19:43:13.550270  769029 ssh_runner.go:195] Run: which crictl
	I1017 19:43:13.555176  769029 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:43:13.584883  769029 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:43:13.584952  769029 ssh_runner.go:195] Run: crio --version
	I1017 19:43:13.616969  769029 ssh_runner.go:195] Run: crio --version
	I1017 19:43:13.654295  769029 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:43:13.655638  769029 cli_runner.go:164] Run: docker network inspect newest-cni-438547 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:43:13.676350  769029 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1017 19:43:13.681804  769029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
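This is minikube's idempotent /etc/hosts update: filter out any stale host.minikube.internal line, append the fresh record, and stage the result in a PID-keyed temp file before copying it back with sudo (a direct `sudo ... > /etc/hosts` would fail, since the redirection is performed by the unprivileged shell). Re-quoted for readability:

	{
	    grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any old record
	    printf '192.168.103.1\thost.minikube.internal\n'  # append the fresh one
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts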
	I1017 19:43:13.696745  769029 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1017 19:43:13.697943  769029 kubeadm.go:883] updating cluster {Name:newest-cni-438547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:43:13.698117  769029 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:43:13.698203  769029 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:43:13.735573  769029 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:43:13.735603  769029 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:43:13.735662  769029 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:43:13.767068  769029 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:43:13.767092  769029 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:43:13.767107  769029 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1017 19:43:13.767210  769029 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-438547 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
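In the kubelet drop-in above, the empty `ExecStart=` line is deliberate systemd syntax: inside a drop-in, an empty assignment clears the ExecStart inherited from the base kubelet.service, and the next assignment then installs minikube's replacement command rather than appending a second one:

	ExecStart=                                                  # reset inherited ExecStart
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet ...    # set the replacement (flags as logged above)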
	I1017 19:43:13.767275  769029 ssh_runner.go:195] Run: crio config
	I1017 19:43:13.823554  769029 cni.go:84] Creating CNI manager for ""
	I1017 19:43:13.823582  769029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:43:13.823607  769029 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1017 19:43:13.823638  769029 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-438547 NodeName:newest-cni-438547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:43:13.823829  769029 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-438547"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
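The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. On a fresh start kubeadm would consume them as, roughly (a sketch only; this run takes the restart path instead, and minikube's real invocation carries extra flags not shown here):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml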
	I1017 19:43:13.823912  769029 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:43:13.834125  769029 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:43:13.834224  769029 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:43:13.846415  769029 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1017 19:43:13.864709  769029 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:43:13.882057  769029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1017 19:43:13.899973  769029 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1017 19:43:13.905150  769029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:43:13.919261  769029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:43:14.044492  769029 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:43:14.071853  769029 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547 for IP: 192.168.103.2
	I1017 19:43:14.071878  769029 certs.go:195] generating shared ca certs ...
	I1017 19:43:14.071900  769029 certs.go:227] acquiring lock for ca certs: {Name:mkc97483d62151ba5c32d923dd19e3e2b3661468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:14.072080  769029 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key
	I1017 19:43:14.072150  769029 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key
	I1017 19:43:14.072161  769029 certs.go:257] generating profile certs ...
	I1017 19:43:14.072402  769029 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/client.key
	I1017 19:43:14.072487  769029 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/apiserver.key.df6baa7a
	I1017 19:43:14.072531  769029 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/proxy-client.key
	I1017 19:43:14.072666  769029 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725.pem (1338 bytes)
	W1017 19:43:14.072771  769029 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725_empty.pem, impossibly tiny 0 bytes
	I1017 19:43:14.072797  769029 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 19:43:14.072845  769029 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem (1078 bytes)
	I1017 19:43:14.072877  769029 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:43:14.072903  769029 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem (1679 bytes)
	I1017 19:43:14.073021  769029 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:43:14.074186  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:43:14.100888  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:43:14.126129  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:43:14.149921  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 19:43:14.183666  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 19:43:14.208296  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:43:14.233165  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:43:14.259177  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:43:14.282988  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /usr/share/ca-certificates/4957252.pem (1708 bytes)
	I1017 19:43:14.307613  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:43:14.331583  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725.pem --> /usr/share/ca-certificates/495725.pem (1338 bytes)
	I1017 19:43:14.357401  769029 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:43:14.373675  769029 ssh_runner.go:195] Run: openssl version
	I1017 19:43:14.381023  769029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4957252.pem && ln -fs /usr/share/ca-certificates/4957252.pem /etc/ssl/certs/4957252.pem"
	I1017 19:43:14.391990  769029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4957252.pem
	I1017 19:43:14.396328  769029 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/4957252.pem
	I1017 19:43:14.396407  769029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4957252.pem
	I1017 19:43:14.439212  769029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4957252.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:43:14.451347  769029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:43:14.462548  769029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:43:14.468226  769029 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:43:14.468277  769029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:43:14.508308  769029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:43:14.519832  769029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/495725.pem && ln -fs /usr/share/ca-certificates/495725.pem /etc/ssl/certs/495725.pem"
	I1017 19:43:14.530589  769029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/495725.pem
	I1017 19:43:14.536202  769029 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/495725.pem
	I1017 19:43:14.536272  769029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/495725.pem
	I1017 19:43:14.584197  769029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/495725.pem /etc/ssl/certs/51391683.0"
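The openssl/ln sequence between 19:43:14.373675 and 19:43:14.584197 builds the hashed-name lookup that c_rehash normally maintains: each CA certificate is symlinked as <subject-hash>.0 under /etc/ssl/certs so OpenSSL-based clients can locate it during verification. The same two steps by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"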
	I1017 19:43:14.593801  769029 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:43:14.598812  769029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:43:14.642012  769029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:43:14.689390  769029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:43:14.750185  769029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:43:14.810787  769029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:43:14.867293  769029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
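Each `-checkend 86400` call asks whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl exits 0 if so and non-zero otherwise, which is how minikube decides it can reuse the existing control-plane certificates instead of regenerating them. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	    && echo "valid for at least 24h" \
	    || echo "expires within 24h"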
	I1017 19:43:14.923926  769029 kubeadm.go:400] StartCluster: {Name:newest-cni-438547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:43:14.924114  769029 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:43:14.924253  769029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:43:14.968342  769029 cri.go:89] found id: "783ad2b5346ea181a472270de81a22e9136094d7a4a6901197f9b3b4dd831dd6"
	I1017 19:43:14.968389  769029 cri.go:89] found id: "fba4a1410021bdf673cba310189091795eb97198d5419e4df6a5ea9b8ceea611"
	I1017 19:43:14.968396  769029 cri.go:89] found id: "2e544eb21d59ec702243e34c0c9957da878518767a5d668acdbf48ab0caa8515"
	I1017 19:43:14.968402  769029 cri.go:89] found id: "8140e5435bac0f77a7bf313d441166129425c73e3e1d7fabfc13834d3cfa44bd"
	I1017 19:43:14.968408  769029 cri.go:89] found id: ""
	I1017 19:43:14.968456  769029 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 19:43:14.985899  769029 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:43:14Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:43:14.985989  769029 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:43:14.997604  769029 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 19:43:14.997638  769029 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 19:43:14.997713  769029 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 19:43:15.013664  769029 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:43:15.016226  769029 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-438547" does not appear in /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:43:15.017235  769029 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-492109/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-438547" cluster setting kubeconfig missing "newest-cni-438547" context setting]
	I1017 19:43:15.019314  769029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:15.021434  769029 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 19:43:15.032992  769029 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1017 19:43:15.033043  769029 kubeadm.go:601] duration metric: took 35.397345ms to restartPrimaryControlPlane
	I1017 19:43:15.033057  769029 kubeadm.go:402] duration metric: took 109.148342ms to StartCluster
	I1017 19:43:15.033082  769029 settings.go:142] acquiring lock: {Name:mkb8ebc6edbbb6915dd74086f502bcc2721555a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:15.033205  769029 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:43:15.035434  769029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:15.035742  769029 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:43:15.035914  769029 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:43:15.036025  769029 config.go:182] Loaded profile config "newest-cni-438547": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:15.036040  769029 addons.go:69] Setting dashboard=true in profile "newest-cni-438547"
	I1017 19:43:15.036056  769029 addons.go:238] Setting addon dashboard=true in "newest-cni-438547"
	W1017 19:43:15.036073  769029 addons.go:247] addon dashboard should already be in state true
	I1017 19:43:15.036029  769029 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-438547"
	I1017 19:43:15.036088  769029 addons.go:69] Setting default-storageclass=true in profile "newest-cni-438547"
	I1017 19:43:15.036112  769029 host.go:66] Checking if "newest-cni-438547" exists ...
	I1017 19:43:15.036123  769029 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-438547"
	I1017 19:43:15.036094  769029 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-438547"
	W1017 19:43:15.036235  769029 addons.go:247] addon storage-provisioner should already be in state true
	I1017 19:43:15.036258  769029 host.go:66] Checking if "newest-cni-438547" exists ...
	I1017 19:43:15.036502  769029 cli_runner.go:164] Run: docker container inspect newest-cni-438547 --format={{.State.Status}}
	I1017 19:43:15.036644  769029 cli_runner.go:164] Run: docker container inspect newest-cni-438547 --format={{.State.Status}}
	I1017 19:43:15.036930  769029 cli_runner.go:164] Run: docker container inspect newest-cni-438547 --format={{.State.Status}}
	I1017 19:43:15.042278  769029 out.go:179] * Verifying Kubernetes components...
	I1017 19:43:15.047340  769029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:43:15.064873  769029 addons.go:238] Setting addon default-storageclass=true in "newest-cni-438547"
	W1017 19:43:15.064899  769029 addons.go:247] addon default-storageclass should already be in state true
	I1017 19:43:15.064930  769029 host.go:66] Checking if "newest-cni-438547" exists ...
	I1017 19:43:15.065420  769029 cli_runner.go:164] Run: docker container inspect newest-cni-438547 --format={{.State.Status}}
	I1017 19:43:15.072536  769029 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 19:43:15.072614  769029 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 19:43:15.074636  769029 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:43:15.075211  769029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 19:43:15.075318  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:15.075181  769029 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1017 19:43:10.969605  766638 pod_ready.go:104] pod "coredns-66bc5c9577-vckxk" is not "Ready", error: <nil>
	W1017 19:43:13.465361  766638 pod_ready.go:104] pod "coredns-66bc5c9577-vckxk" is not "Ready", error: <nil>
	W1017 19:43:15.467307  766638 pod_ready.go:104] pod "coredns-66bc5c9577-vckxk" is not "Ready", error: <nil>
	I1017 19:43:15.076801  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 19:43:15.076850  769029 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 19:43:15.076936  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:15.096916  769029 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 19:43:15.096946  769029 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 19:43:15.097026  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:15.114759  769029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:43:15.116989  769029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:43:15.130146  769029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:43:15.224366  769029 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:43:15.246009  769029 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:43:15.246144  769029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:43:15.247206  769029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:43:15.249128  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 19:43:15.249146  769029 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 19:43:15.269436  769029 api_server.go:72] duration metric: took 233.647317ms to wait for apiserver process to appear ...
	I1017 19:43:15.269471  769029 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:43:15.269495  769029 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 19:43:15.270227  769029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 19:43:15.279303  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 19:43:15.279332  769029 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 19:43:15.306530  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 19:43:15.306564  769029 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 19:43:15.326750  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 19:43:15.326781  769029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 19:43:15.352386  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 19:43:15.352417  769029 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 19:43:15.374346  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 19:43:15.374388  769029 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 19:43:15.391299  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 19:43:15.391339  769029 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 19:43:15.408563  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 19:43:15.408603  769029 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 19:43:15.427279  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 19:43:15.427308  769029 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 19:43:15.445397  769029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
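
	That single `kubectl apply` pushes all ten dashboard manifests that were scp'd over above. Once it returns, the resulting objects live in the kubernetes-dashboard namespace (the apiserver log further down shows cluster IPs being allocated for the two dashboard services), and a quick way to confirm them, as a sketch outside the test run, is:

	    kubectl --context newest-cni-438547 -n kubernetes-dashboard get deploy,svc,pods
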
	I1017 19:43:12.114082  760682 addons.go:514] duration metric: took 835.040796ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1017 19:43:12.372372  760682 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-448344" context rescaled to 1 replicas
	W1017 19:43:13.560661  760682 node_ready.go:57] node "auto-448344" has "Ready":"False" status (will retry)
	W1017 19:43:15.561577  760682 node_ready.go:57] node "auto-448344" has "Ready":"False" status (will retry)
	I1017 19:43:16.925552  769029 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1017 19:43:16.925589  769029 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1017 19:43:16.925608  769029 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 19:43:16.934120  769029 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1017 19:43:16.934153  769029 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1017 19:43:17.270534  769029 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 19:43:17.275240  769029 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 19:43:17.275272  769029 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
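
	A 500 with `[-]poststarthook/rbac/bootstrap-roles failed` and `[-]poststarthook/scheduling/bootstrap-system-priority-classes failed` means the apiserver is serving but its post-start hooks have not finished seeding the bootstrap RBAC roles and system priority classes; minikube simply keeps polling until the endpoint flips to 200. The anonymous 403s earlier are the same probe hitting the endpoint before it authenticates. To watch the same endpoint with authenticated credentials once a kubeconfig exists:

	    kubectl --context newest-cni-438547 get --raw='/healthz?verbose'
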
	I1017 19:43:17.516462  769029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.269195174s)
	I1017 19:43:17.516481  769029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.24621924s)
	I1017 19:43:17.516592  769029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.071155849s)
	I1017 19:43:17.518432  769029 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-438547 addons enable metrics-server
	
	I1017 19:43:17.530597  769029 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1017 19:43:13.800622  761258 pod_ready.go:104] pod "coredns-66bc5c9577-frpnt" is not "Ready", error: <nil>
	W1017 19:43:15.802200  761258 pod_ready.go:104] pod "coredns-66bc5c9577-frpnt" is not "Ready", error: <nil>
	I1017 19:43:17.532145  769029 addons.go:514] duration metric: took 2.496242951s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1017 19:43:17.769598  769029 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 19:43:17.773963  769029 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 19:43:17.773989  769029 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 19:43:18.270170  769029 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 19:43:18.275117  769029 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1017 19:43:18.276322  769029 api_server.go:141] control plane version: v1.34.1
	I1017 19:43:18.276364  769029 api_server.go:131] duration metric: took 3.006886118s to wait for apiserver health ...
	I1017 19:43:18.276374  769029 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:43:18.279789  769029 system_pods.go:59] 8 kube-system pods found
	I1017 19:43:18.279825  769029 system_pods.go:61] "coredns-66bc5c9577-8pfhn" [6d0a8a45-e3f8-4e59-b735-4f1236cf5953] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 19:43:18.279837  769029 system_pods.go:61] "etcd-newest-cni-438547" [aaf7399b-5274-44fa-a929-a515b9341276] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:43:18.279846  769029 system_pods.go:61] "kindnet-nhg7f" [368f40c9-2ab9-4d9d-9310-950d3371f4c0] Running
	I1017 19:43:18.279868  769029 system_pods.go:61] "kube-apiserver-newest-cni-438547" [25c05b7c-518e-4bc1-94cc-e2a8a04f104b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:43:18.279882  769029 system_pods.go:61] "kube-controller-manager-newest-cni-438547" [eba5d490-129b-4739-95bd-e10a4fd73c40] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:43:18.279890  769029 system_pods.go:61] "kube-proxy-zfk4z" [a38161c3-4097-4e85-b391-e3b730dd90b6] Running
	I1017 19:43:18.279898  769029 system_pods.go:61] "kube-scheduler-newest-cni-438547" [8210e114-0804-429b-8518-30042567db4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:43:18.279907  769029 system_pods.go:61] "storage-provisioner" [39d961dc-a8fd-4066-b46e-3e02ec6d04f6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 19:43:18.279914  769029 system_pods.go:74] duration metric: took 3.534199ms to wait for pod list to return data ...
	I1017 19:43:18.279928  769029 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:43:18.283213  769029 default_sa.go:45] found service account: "default"
	I1017 19:43:18.283237  769029 default_sa.go:55] duration metric: took 3.295545ms for default service account to be created ...
	I1017 19:43:18.283249  769029 kubeadm.go:586] duration metric: took 3.247469876s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 19:43:18.283267  769029 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:43:18.285877  769029 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 19:43:18.285901  769029 node_conditions.go:123] node cpu capacity is 8
	I1017 19:43:18.285916  769029 node_conditions.go:105] duration metric: took 2.645649ms to run NodePressure ...
	I1017 19:43:18.285928  769029 start.go:241] waiting for startup goroutines ...
	I1017 19:43:18.285935  769029 start.go:246] waiting for cluster config update ...
	I1017 19:43:18.285945  769029 start.go:255] writing updated cluster config ...
	I1017 19:43:18.286199  769029 ssh_runner.go:195] Run: rm -f paused
	I1017 19:43:18.338456  769029 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 19:43:18.341423  769029 out.go:179] * Done! kubectl is now configured to use "newest-cni-438547" cluster and "default" namespace by default
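
	With the start sequence finished, the context written above is active, so a sanity check (a sketch, not part of the test run) would be:

	    kubectl --context newest-cni-438547 get nodes -o wide
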
	W1017 19:43:17.971240  766638 pod_ready.go:104] pod "coredns-66bc5c9577-vckxk" is not "Ready", error: <nil>
	W1017 19:43:20.465407  766638 pod_ready.go:104] pod "coredns-66bc5c9577-vckxk" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.464469277Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.467936927Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=8ff39a9b-d3de-4391-b3fc-81d186b29d5d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.468606879Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c371cd10-1b01-455c-adcf-ed6315723d67 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.469718808Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.470156996Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.470590175Z" level=info msg="Ran pod sandbox c138f9334b126d7fd0e6a9c1b4678a36e2633d5363e264dba2b10d7c849be6d3 with infra container: kube-system/kube-proxy-zfk4z/POD" id=8ff39a9b-d3de-4391-b3fc-81d186b29d5d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.47080178Z" level=info msg="Ran pod sandbox adba52d0b36216dee9586ae2b99c77d48cfd4cd9bb88efb673ef24ae01166c50 with infra container: kube-system/kindnet-nhg7f/POD" id=c371cd10-1b01-455c-adcf-ed6315723d67 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.471989759Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=589b3a6f-5dd4-4f5a-96b3-a63d46819a52 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.472027367Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f2e6e3d2-2d2f-4511-989a-b3ea56e2f184 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.473048303Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6c9805a3-fd23-4c57-b2fe-87147f0b42ef name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.473085075Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7f7212b0-846e-40a7-9576-201fabbccc67 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.474181492Z" level=info msg="Creating container: kube-system/kube-proxy-zfk4z/kube-proxy" id=855d4096-0b03-4357-8d2a-71c40282b3b3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.474325864Z" level=info msg="Creating container: kube-system/kindnet-nhg7f/kindnet-cni" id=6d530d34-0eec-42a2-936b-5ea5dd6ca7e5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.474451619Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.474517993Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.478921513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.479583435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.481784614Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.482403469Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.51265261Z" level=info msg="Created container 396b79a83b6aad7be20450af8a558a28d65313c75e489cd893f4c91b119849ac: kube-system/kindnet-nhg7f/kindnet-cni" id=6d530d34-0eec-42a2-936b-5ea5dd6ca7e5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.513442457Z" level=info msg="Starting container: 396b79a83b6aad7be20450af8a558a28d65313c75e489cd893f4c91b119849ac" id=5800c0bf-da97-4e9c-aec3-6666c90e2b80 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.514894337Z" level=info msg="Created container 5cb60d7d09aab27fd00d9f02862df4772fb81077b845a75cc383b5fcfabe2bef: kube-system/kube-proxy-zfk4z/kube-proxy" id=855d4096-0b03-4357-8d2a-71c40282b3b3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.515557792Z" level=info msg="Started container" PID=1035 containerID=396b79a83b6aad7be20450af8a558a28d65313c75e489cd893f4c91b119849ac description=kube-system/kindnet-nhg7f/kindnet-cni id=5800c0bf-da97-4e9c-aec3-6666c90e2b80 name=/runtime.v1.RuntimeService/StartContainer sandboxID=adba52d0b36216dee9586ae2b99c77d48cfd4cd9bb88efb673ef24ae01166c50
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.515647399Z" level=info msg="Starting container: 5cb60d7d09aab27fd00d9f02862df4772fb81077b845a75cc383b5fcfabe2bef" id=e0efa087-9f2b-4327-89d4-112000764640 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.519164444Z" level=info msg="Started container" PID=1036 containerID=5cb60d7d09aab27fd00d9f02862df4772fb81077b845a75cc383b5fcfabe2bef description=kube-system/kube-proxy-zfk4z/kube-proxy id=e0efa087-9f2b-4327-89d4-112000764640 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c138f9334b126d7fd0e6a9c1b4678a36e2633d5363e264dba2b10d7c849be6d3
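
	The CRI-O entries above show the second-attempt kube-proxy and kindnet containers being created and started after the restart. To inspect them directly on the node, crictl accepts the ID prefixes printed here:

	    minikube -p newest-cni-438547 ssh -- sudo crictl ps
	    minikube -p newest-cni-438547 ssh -- sudo crictl logs 5cb60d7d09aab
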
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5cb60d7d09aab       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   c138f9334b126       kube-proxy-zfk4z                            kube-system
	396b79a83b6aa       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   adba52d0b3621       kindnet-nhg7f                               kube-system
	783ad2b5346ea       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   fc46ff75d1185       etcd-newest-cni-438547                      kube-system
	fba4a1410021b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   2e66efd6c4f83       kube-scheduler-newest-cni-438547            kube-system
	2e544eb21d59e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   fcfe720e63430       kube-controller-manager-newest-cni-438547   kube-system
	8140e5435bac0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   999d8b403501a       kube-apiserver-newest-cni-438547            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-438547
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-438547
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=newest-cni-438547
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_42_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:42:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-438547
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:43:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:43:17 +0000   Fri, 17 Oct 2025 19:42:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:43:17 +0000   Fri, 17 Oct 2025 19:42:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:43:17 +0000   Fri, 17 Oct 2025 19:42:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 17 Oct 2025 19:43:17 +0000   Fri, 17 Oct 2025 19:42:32 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-438547
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                6f16ffd1-311d-4f27-b795-37ce231ef7a2
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-438547                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         46s
	  kube-system                 kindnet-nhg7f                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      39s
	  kube-system                 kube-apiserver-newest-cni-438547             250m (3%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-controller-manager-newest-cni-438547    200m (2%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-zfk4z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-scheduler-newest-cni-438547             100m (1%)     0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 37s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  Starting                 50s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node newest-cni-438547 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node newest-cni-438547 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x8 over 50s)  kubelet          Node newest-cni-438547 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    45s                kubelet          Node newest-cni-438547 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  45s                kubelet          Node newest-cni-438547 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     45s                kubelet          Node newest-cni-438547 status is now: NodeHasSufficientPID
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           40s                node-controller  Node newest-cni-438547 event: Registered Node newest-cni-438547 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x7 over 8s)    kubelet          Node newest-cni-438547 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x6 over 8s)    kubelet          Node newest-cni-438547 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x6 over 8s)    kubelet          Node newest-cni-438547 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s                 node-controller  Node newest-cni-438547 event: Registered Node newest-cni-438547 in Controller
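
	The two node.kubernetes.io/not-ready taints and the Ready=False condition trace back to the missing CNI configuration called out in the KubeletNotReady message; they should clear once kindnet (restarted 4 seconds before this snapshot) writes its config into /etc/cni/net.d/. One way to watch for that transition, as a sketch:

	    kubectl --context newest-cni-438547 get node newest-cni-438547 \
	      -o jsonpath='{.spec.taints[*].key}{"\n"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
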
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022229] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023876] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee e4 05 02 02 de 08 06
	[  +0.011274] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 4e 5e a6 cc 79 08 06
	
	
	==> etcd [783ad2b5346ea181a472270de81a22e9136094d7a4a6901197f9b3b4dd831dd6] <==
	{"level":"warn","ts":"2025-10-17T19:43:16.060572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.072380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.082804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.094925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.102875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.112083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.120277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.128369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.137329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.145919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.154155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.162983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.172417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.182975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.192048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.200879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.209172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.219407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.227135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.236111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.244964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.261960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.270593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.279863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.353198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42956","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:43:22 up  3:25,  0 user,  load average: 3.39, 3.34, 2.23
	Linux newest-cni-438547 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [396b79a83b6aad7be20450af8a558a28d65313c75e489cd893f4c91b119849ac] <==
	I1017 19:43:17.740482       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:43:17.740774       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1017 19:43:17.740936       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:43:17.740958       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:43:17.740988       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:43:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:43:17.939463       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:43:17.939481       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:43:17.939488       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:43:17.939590       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:43:18.339576       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:43:18.339631       1 metrics.go:72] Registering metrics
	I1017 19:43:18.339732       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [8140e5435bac0f77a7bf313d441166129425c73e3e1d7fabfc13834d3cfa44bd] <==
	I1017 19:43:16.992809       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1017 19:43:16.992898       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:43:16.992963       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:43:16.993277       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 19:43:16.994158       1 aggregator.go:171] initial CRD sync complete...
	I1017 19:43:16.994186       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 19:43:16.994192       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 19:43:16.994199       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:43:16.994984       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 19:43:16.995079       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 19:43:17.002839       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1017 19:43:17.003275       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 19:43:17.011488       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:43:17.013022       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:43:17.256261       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:43:17.312054       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 19:43:17.346499       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 19:43:17.368421       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:43:17.377643       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:43:17.419200       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.152.206"}
	I1017 19:43:17.429632       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.105.28"}
	I1017 19:43:17.894757       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:43:20.584999       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:43:20.634541       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:43:20.834647       1 controller.go:667] quota admission added evaluator for: replicasets.apps
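
	The `all system priority classes are created successfully` line at 19:43:17.894 is the post-start hook whose earlier failure kept /healthz at 500; its completion is why the probe at 19:43:18.275 finally returns 200. If needed, the seeded classes can be listed with:

	    kubectl --context newest-cni-438547 get priorityclasses
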
	
	
	==> kube-controller-manager [2e544eb21d59ec702243e34c0c9957da878518767a5d668acdbf48ab0caa8515] <==
	I1017 19:43:20.269187       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 19:43:20.270310       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 19:43:20.275583       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 19:43:20.277847       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 19:43:20.280895       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 19:43:20.281802       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 19:43:20.281819       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 19:43:20.281839       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 19:43:20.281849       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 19:43:20.281874       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:43:20.281913       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 19:43:20.281910       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 19:43:20.282019       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-438547"
	I1017 19:43:20.282084       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1017 19:43:20.282091       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 19:43:20.286776       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:43:20.293612       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 19:43:20.293664       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 19:43:20.293710       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 19:43:20.293721       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 19:43:20.293728       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 19:43:20.331008       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:43:20.331035       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:43:20.331043       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 19:43:20.343376       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5cb60d7d09aab27fd00d9f02862df4772fb81077b845a75cc383b5fcfabe2bef] <==
	I1017 19:43:17.559718       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:43:17.616622       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:43:17.717417       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:43:17.717455       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1017 19:43:17.717531       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:43:17.736274       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:43:17.736328       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:43:17.741868       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:43:17.742788       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:43:17.742822       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:43:17.744909       1 config.go:200] "Starting service config controller"
	I1017 19:43:17.744932       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:43:17.744939       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:43:17.744956       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:43:17.744975       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:43:17.744989       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:43:17.745019       1 config.go:309] "Starting node config controller"
	I1017 19:43:17.745031       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:43:17.745037       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:43:17.845856       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:43:17.845960       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:43:17.846060       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fba4a1410021bdf673cba310189091795eb97198d5419e4df6a5ea9b8ceea611] <==
	I1017 19:43:15.537612       1 serving.go:386] Generated self-signed cert in-memory
	W1017 19:43:16.939762       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 19:43:16.939827       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 19:43:16.939841       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 19:43:16.939849       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 19:43:16.970644       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 19:43:16.970768       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:43:16.976739       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:43:16.977275       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 19:43:16.977304       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:43:16.977428       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 19:43:17.077527       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:43:16 newest-cni-438547 kubelet[661]: E1017 19:43:16.215238     661 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-438547\" not found" node="newest-cni-438547"
	Oct 17 19:43:16 newest-cni-438547 kubelet[661]: E1017 19:43:16.215430     661 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-438547\" not found" node="newest-cni-438547"
	Oct 17 19:43:16 newest-cni-438547 kubelet[661]: E1017 19:43:16.216841     661 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-438547\" not found" node="newest-cni-438547"
	Oct 17 19:43:16 newest-cni-438547 kubelet[661]: I1017 19:43:16.961765     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.023948     661 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.024065     661 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.024106     661 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.025655     661 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: E1017 19:43:17.081611     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-438547\" already exists" pod="kube-system/kube-controller-manager-newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.081655     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: E1017 19:43:17.087166     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-438547\" already exists" pod="kube-system/kube-scheduler-newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.087208     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: E1017 19:43:17.093834     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-438547\" already exists" pod="kube-system/etcd-newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.093868     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: E1017 19:43:17.101137     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-438547\" already exists" pod="kube-system/kube-apiserver-newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.155881     661 apiserver.go:52] "Watching apiserver"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.161061     661 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.252985     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/368f40c9-2ab9-4d9d-9310-950d3371f4c0-lib-modules\") pod \"kindnet-nhg7f\" (UID: \"368f40c9-2ab9-4d9d-9310-950d3371f4c0\") " pod="kube-system/kindnet-nhg7f"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.253052     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a38161c3-4097-4e85-b391-e3b730dd90b6-xtables-lock\") pod \"kube-proxy-zfk4z\" (UID: \"a38161c3-4097-4e85-b391-e3b730dd90b6\") " pod="kube-system/kube-proxy-zfk4z"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.253082     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a38161c3-4097-4e85-b391-e3b730dd90b6-lib-modules\") pod \"kube-proxy-zfk4z\" (UID: \"a38161c3-4097-4e85-b391-e3b730dd90b6\") " pod="kube-system/kube-proxy-zfk4z"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.253437     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/368f40c9-2ab9-4d9d-9310-950d3371f4c0-cni-cfg\") pod \"kindnet-nhg7f\" (UID: \"368f40c9-2ab9-4d9d-9310-950d3371f4c0\") " pod="kube-system/kindnet-nhg7f"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.253477     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/368f40c9-2ab9-4d9d-9310-950d3371f4c0-xtables-lock\") pod \"kindnet-nhg7f\" (UID: \"368f40c9-2ab9-4d9d-9310-950d3371f4c0\") " pod="kube-system/kindnet-nhg7f"
	Oct 17 19:43:19 newest-cni-438547 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 19:43:19 newest-cni-438547 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 19:43:19 newest-cni-438547 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-438547 -n newest-cni-438547
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-438547 -n newest-cni-438547: exit status 2 (342.702536ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
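For reference, the status probes in this post-mortem pull a single field out of minikube's status object with a Go template; a minimal sketch along the same lines (flags copied from the harness invocation above, field names from the minikube status template):

    # print only the API server state for this profile/node;
    # exit status 2 indicates a degraded cluster and, as noted above, may be tolerable
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-438547 -n newest-cni-438547
    # the same status object also exposes .Host, .Kubelet and .Kubeconfig
    out/minikube-linux-amd64 status --format='{{.Host}}:{{.Kubelet}}:{{.APIServer}}' -p newest-cni-438547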
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-438547 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-8pfhn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tsn4q kubernetes-dashboard-855c9754f9-kx9tq
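The non-running-pod list above comes from a kubectl field selector; an equivalent query that also shows where each pod lives and what phase it is in (the custom-columns names are illustrative):

    # list every pod whose phase is not Running, with namespace and phase
    kubectl --context newest-cni-438547 get po -A \
      --field-selector=status.phase!=Running \
      -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PHASE:.status.phase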
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-438547 describe pod coredns-66bc5c9577-8pfhn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tsn4q kubernetes-dashboard-855c9754f9-kx9tq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-438547 describe pod coredns-66bc5c9577-8pfhn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tsn4q kubernetes-dashboard-855c9754f9-kx9tq: exit status 1 (67.691503ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-8pfhn" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-tsn4q" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-kx9tq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-438547 describe pod coredns-66bc5c9577-8pfhn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tsn4q kubernetes-dashboard-855c9754f9-kx9tq: exit status 1
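The NotFound errors are an artifact of how the helper invokes kubectl: `describe pod` without -n or -A searches only the default namespace, while these pods presumably live in kube-system and kubernetes-dashboard. Namespaced lookups would resolve them, e.g.:

    # describe each pod in the namespace it actually belongs to
    kubectl --context newest-cni-438547 -n kube-system describe pod coredns-66bc5c9577-8pfhn storage-provisioner
    kubectl --context newest-cni-438547 -n kubernetes-dashboard describe pod kubernetes-dashboard-855c9754f9-kx9tq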
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-438547
helpers_test.go:243: (dbg) docker inspect newest-cni-438547:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a",
	        "Created": "2025-10-17T19:42:20.078531564Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 769397,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:43:06.714154888Z",
	            "FinishedAt": "2025-10-17T19:43:05.504987922Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a/hostname",
	        "HostsPath": "/var/lib/docker/containers/54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a/hosts",
	        "LogPath": "/var/lib/docker/containers/54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a/54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a-json.log",
	        "Name": "/newest-cni-438547",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-438547:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-438547",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "54bdc696aaf8b8a7838ac3f4a2b8a4d824bac93d2c21012a82e85fba78b1887a",
	                "LowerDir": "/var/lib/docker/overlay2/b72f80e5d1663080bbedf26b788b7f64c463dad3e253926c9453e0666b33a8a4-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b72f80e5d1663080bbedf26b788b7f64c463dad3e253926c9453e0666b33a8a4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b72f80e5d1663080bbedf26b788b7f64c463dad3e253926c9453e0666b33a8a4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b72f80e5d1663080bbedf26b788b7f64c463dad3e253926c9453e0666b33a8a4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-438547",
	                "Source": "/var/lib/docker/volumes/newest-cni-438547/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-438547",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-438547",
	                "name.minikube.sigs.k8s.io": "newest-cni-438547",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0dd8aa4c2675f15f3781d0eecbf6f70953ab9d3b361340373f28f3590cc132a9",
	            "SandboxKey": "/var/run/docker/netns/0dd8aa4c2675",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-438547": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:bb:8e:98:d6:41",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "77fe0b660d34aea0508d43e4b8b59b631dd8d785f42a3fec7199378905db0191",
	                    "EndpointID": "37fb774b74d52c6730b0bf79698c19a63992086e370b75c3a3dedb7e9fd56598",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-438547",
	                        "54bdc696aaf8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
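When only a few fields of the inspect dump matter, `docker inspect` also accepts a Go template; a minimal sketch that extracts the container state and the host port mapped to the Kubernetes API (port key taken from the output above):

    # container state plus the 127.0.0.1 port bound to 8443/tcp (the API server)
    docker inspect -f '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-438547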
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438547 -n newest-cni-438547
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438547 -n newest-cni-438547: exit status 2 (348.400792ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-438547 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-438547 logs -n 25: (1.00380927s)
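`logs -n 25` limits each component's log tail to 25 lines; when triaging interactively, the same dump can be written to a file instead of stdout (assuming the standard `--file` flag):

    # capture the post-mortem logs for offline inspection
    out/minikube-linux-amd64 -p newest-cni-438547 logs -n 25 --file=newest-cni-438547-postmortem.log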
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-112878 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:41 UTC │ 17 Oct 25 19:42 UTC │
	│ image   │ no-preload-171807 image list --format=json                                                                                                                                                                                                    │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ pause   │ -p no-preload-171807 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ delete  │ -p no-preload-171807                                                                                                                                                                                                                          │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ delete  │ -p no-preload-171807                                                                                                                                                                                                                          │ no-preload-171807            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-438547 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ start   │ -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-137244    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-137244    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ image   │ embed-certs-599709 image list --format=json                                                                                                                                                                                                   │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ pause   │ -p embed-certs-599709 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-112878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-112878 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ delete  │ -p kubernetes-upgrade-137244                                                                                                                                                                                                                  │ kubernetes-upgrade-137244    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ delete  │ -p embed-certs-599709                                                                                                                                                                                                                         │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ start   │ -p auto-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ delete  │ -p embed-certs-599709                                                                                                                                                                                                                         │ embed-certs-599709           │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ start   │ -p enable-default-cni-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio                                                                               │ enable-default-cni-448344    │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-438547 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ stop    │ -p newest-cni-438547 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-112878 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │ 17 Oct 25 19:42 UTC │
	│ start   │ -p default-k8s-diff-port-112878 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:42 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-438547 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ start   │ -p newest-cni-438547 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ image   │ newest-cni-438547 image list --format=json                                                                                                                                                                                                    │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ pause   │ -p newest-cni-438547 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-438547            │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:43:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:43:06.319972  769029 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:43:06.322927  769029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:43:06.322950  769029 out.go:374] Setting ErrFile to fd 2...
	I1017 19:43:06.322959  769029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:43:06.323496  769029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:43:06.324497  769029 out.go:368] Setting JSON to false
	I1017 19:43:06.326615  769029 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12325,"bootTime":1760717861,"procs":338,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:43:06.326843  769029 start.go:141] virtualization: kvm guest
	I1017 19:43:06.329640  769029 out.go:179] * [newest-cni-438547] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:43:06.332298  769029 notify.go:220] Checking for updates...
	I1017 19:43:06.332377  769029 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:43:06.333918  769029 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:43:06.335499  769029 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:43:06.337621  769029 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:43:06.338878  769029 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:43:06.341038  769029 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:43:05.599158  766638 node_ready.go:49] node "default-k8s-diff-port-112878" is "Ready"
	I1017 19:43:05.599201  766638 node_ready.go:38] duration metric: took 1.614895769s for node "default-k8s-diff-port-112878" to be "Ready" ...
	I1017 19:43:05.599222  766638 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:43:05.599276  766638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:43:06.409938  766638 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.422807633s)
	I1017 19:43:06.410181  766638 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.392451555s)
	I1017 19:43:06.410315  766638 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.223990594s)
	I1017 19:43:06.410346  766638 api_server.go:72] duration metric: took 2.641009673s to wait for apiserver process to appear ...
	I1017 19:43:06.410362  766638 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:43:06.410382  766638 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1017 19:43:06.411583  766638 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-112878 addons enable metrics-server
	
	I1017 19:43:06.421116  766638 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 19:43:06.421169  766638 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 19:43:06.430215  766638 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1017 19:43:06.343391  769029 config.go:182] Loaded profile config "newest-cni-438547": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:06.344195  769029 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:43:06.388187  769029 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:43:06.388395  769029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:43:06.465305  769029 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-17 19:43:06.454372433 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:43:06.465427  769029 docker.go:318] overlay module found
	I1017 19:43:06.467425  769029 out.go:179] * Using the docker driver based on existing profile
	I1017 19:43:06.468742  769029 start.go:305] selected driver: docker
	I1017 19:43:06.468762  769029 start.go:925] validating driver "docker" against &{Name:newest-cni-438547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:43:06.468874  769029 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:43:06.469639  769029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:43:06.535884  769029 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-17 19:43:06.523397698 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:43:06.536253  769029 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 19:43:06.536287  769029 cni.go:84] Creating CNI manager for ""
	I1017 19:43:06.536352  769029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:43:06.536424  769029 start.go:349] cluster config:
	{Name:newest-cni-438547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:43:06.539967  769029 out.go:179] * Starting "newest-cni-438547" primary control-plane node in "newest-cni-438547" cluster
	I1017 19:43:06.541332  769029 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:43:06.543215  769029 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:43:06.544450  769029 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:43:06.544507  769029 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:43:06.544519  769029 cache.go:58] Caching tarball of preloaded images
	I1017 19:43:06.544568  769029 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:43:06.544635  769029 preload.go:233] Found /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:43:06.544652  769029 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:43:06.544804  769029 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/config.json ...
	I1017 19:43:06.569782  769029 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:43:06.569806  769029 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:43:06.569828  769029 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:43:06.569861  769029 start.go:360] acquireMachinesLock for newest-cni-438547: {Name:mkf0920afa8583ecdc28963ff3ff9f81a225f71e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:43:06.569949  769029 start.go:364] duration metric: took 50.025µs to acquireMachinesLock for "newest-cni-438547"
	I1017 19:43:06.569980  769029 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:43:06.569988  769029 fix.go:54] fixHost starting: 
	I1017 19:43:06.570298  769029 cli_runner.go:164] Run: docker container inspect newest-cni-438547 --format={{.State.Status}}
	I1017 19:43:06.592361  769029 fix.go:112] recreateIfNeeded on newest-cni-438547: state=Stopped err=<nil>
	W1017 19:43:06.592405  769029 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:43:05.372229  760682 out.go:252]   - Configuring RBAC rules ...
	I1017 19:43:05.372414  760682 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 19:43:05.376948  760682 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 19:43:05.382790  760682 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 19:43:05.386960  760682 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 19:43:05.390167  760682 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 19:43:05.396957  760682 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 19:43:05.721175  760682 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 19:43:06.187601  760682 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 19:43:06.721932  760682 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 19:43:06.722778  760682 kubeadm.go:318] 
	I1017 19:43:06.722963  760682 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 19:43:06.722996  760682 kubeadm.go:318] 
	I1017 19:43:06.723171  760682 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 19:43:06.723205  760682 kubeadm.go:318] 
	I1017 19:43:06.723260  760682 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 19:43:06.723340  760682 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 19:43:06.723434  760682 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 19:43:06.723485  760682 kubeadm.go:318] 
	I1017 19:43:06.723560  760682 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 19:43:06.723790  760682 kubeadm.go:318] 
	I1017 19:43:06.723858  760682 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 19:43:06.723864  760682 kubeadm.go:318] 
	I1017 19:43:06.723922  760682 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 19:43:06.724006  760682 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 19:43:06.724081  760682 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 19:43:06.724086  760682 kubeadm.go:318] 
	I1017 19:43:06.724178  760682 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 19:43:06.724263  760682 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 19:43:06.724268  760682 kubeadm.go:318] 
	I1017 19:43:06.724432  760682 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 8swro8.qh9sbwijb0bw9qm9 \
	I1017 19:43:06.724557  760682 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e \
	I1017 19:43:06.724583  760682 kubeadm.go:318] 	--control-plane 
	I1017 19:43:06.724588  760682 kubeadm.go:318] 
	I1017 19:43:06.724690  760682 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 19:43:06.724700  760682 kubeadm.go:318] 
	I1017 19:43:06.724792  760682 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 8swro8.qh9sbwijb0bw9qm9 \
	I1017 19:43:06.724916  760682 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ae4b222593b9932ac318f80ad834fe09d4c8ed481133016b5c410bf2757b648e 
	I1017 19:43:06.729000  760682 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 19:43:06.729180  760682 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 19:43:06.729266  760682 cni.go:84] Creating CNI manager for ""
	I1017 19:43:06.729280  760682 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:43:06.732501  760682 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 19:43:06.733852  760682 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 19:43:06.741033  760682 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 19:43:06.741061  760682 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 19:43:06.760640  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
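The manifest applied above is minikube's kindnet CNI, recommended whenever the docker driver is paired with the crio runtime; it creates a DaemonSet in kube-system. A minimal hand check after the apply, assuming the DaemonSet keeps its usual name kindnet and label app=kindnet (both assumptions, neither is printed in the log):

	# assumed names: daemonset "kindnet", label app=kindnet in kube-system
	kubectl -n kube-system rollout status daemonset/kindnet --timeout=60s
	kubectl -n kube-system get pods -l app=kindnet -o wide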
	I1017 19:43:05.956523  761258 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1017 19:43:05.973781  761258 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1017 19:43:06.003226  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-448344 minikube.k8s.io/updated_at=2025_10_17T19_43_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=enable-default-cni-448344 minikube.k8s.io/primary=true
	I1017 19:43:06.003649  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:06.003796  761258 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 19:43:06.182165  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:06.182271  761258 ops.go:34] apiserver oom_adj: -16
	I1017 19:43:06.682849  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:07.182571  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:07.682305  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
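The half-second cadence of the repeated get sa default calls above is minikube polling until the default service account exists, the signal that the controller manager's service-account machinery is up. An equivalent shell loop, assuming the same binary and kubeconfig paths the run uses:

	# poll until the default service account appears (same command the log repeats)
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done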
	I1017 19:43:06.431624  766638 addons.go:514] duration metric: took 2.662217197s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1017 19:43:06.910912  766638 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1017 19:43:06.916636  766638 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1017 19:43:06.917918  766638 api_server.go:141] control plane version: v1.34.1
	I1017 19:43:06.917956  766638 api_server.go:131] duration metric: took 507.587162ms to wait for apiserver health ...
	I1017 19:43:06.917965  766638 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:43:06.922920  766638 system_pods.go:59] 8 kube-system pods found
	I1017 19:43:06.922977  766638 system_pods.go:61] "coredns-66bc5c9577-vckxk" [40aad458-e537-456b-8932-594d8406d02d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:43:06.922993  766638 system_pods.go:61] "etcd-default-k8s-diff-port-112878" [0ef85596-9da6-4a2d-9d8c-1007c10aa5c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:43:06.923004  766638 system_pods.go:61] "kindnet-xvc9b" [9d53d141-fce2-4ae1-a29b-4cd44dd4fdea] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:43:06.923019  766638 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-112878" [3611109b-a6cb-4b9e-8ef4-8cd67a6b6d5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:43:06.923028  766638 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-112878" [70168103-f8fc-46fc-8378-869752d9d9f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:43:06.923040  766638 system_pods.go:61] "kube-proxy-d2jpw" [72c3c32f-e74f-46d2-a943-ca279ef893c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:43:06.923095  766638 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-112878" [bec0ce92-ff18-4f92-9085-c601198dacc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:43:06.923109  766638 system_pods.go:61] "storage-provisioner" [7ffb3a0e-4e95-4f0b-940d-c96fec7aa2cc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:43:06.923118  766638 system_pods.go:74] duration metric: took 5.145823ms to wait for pod list to return data ...
	I1017 19:43:06.923136  766638 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:43:06.925918  766638 default_sa.go:45] found service account: "default"
	I1017 19:43:06.925944  766638 default_sa.go:55] duration metric: took 2.800342ms for default service account to be created ...
	I1017 19:43:06.925959  766638 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:43:06.929147  766638 system_pods.go:86] 8 kube-system pods found
	I1017 19:43:06.929207  766638 system_pods.go:89] "coredns-66bc5c9577-vckxk" [40aad458-e537-456b-8932-594d8406d02d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:43:06.929266  766638 system_pods.go:89] "etcd-default-k8s-diff-port-112878" [0ef85596-9da6-4a2d-9d8c-1007c10aa5c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:43:06.929291  766638 system_pods.go:89] "kindnet-xvc9b" [9d53d141-fce2-4ae1-a29b-4cd44dd4fdea] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:43:06.929299  766638 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-112878" [3611109b-a6cb-4b9e-8ef4-8cd67a6b6d5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:43:06.929309  766638 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-112878" [70168103-f8fc-46fc-8378-869752d9d9f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:43:06.929316  766638 system_pods.go:89] "kube-proxy-d2jpw" [72c3c32f-e74f-46d2-a943-ca279ef893c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:43:06.929333  766638 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-112878" [bec0ce92-ff18-4f92-9085-c601198dacc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:43:06.929341  766638 system_pods.go:89] "storage-provisioner" [7ffb3a0e-4e95-4f0b-940d-c96fec7aa2cc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:43:06.929351  766638 system_pods.go:126] duration metric: took 3.385286ms to wait for k8s-apps to be running ...
	I1017 19:43:06.929373  766638 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:43:06.929433  766638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:43:06.944989  766638 system_svc.go:56] duration metric: took 15.602005ms WaitForService to wait for kubelet
	I1017 19:43:06.945024  766638 kubeadm.go:586] duration metric: took 3.175691952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:43:06.945048  766638 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:43:06.948234  766638 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 19:43:06.948272  766638 node_conditions.go:123] node cpu capacity is 8
	I1017 19:43:06.948291  766638 node_conditions.go:105] duration metric: took 3.236546ms to run NodePressure ...
	I1017 19:43:06.948308  766638 start.go:241] waiting for startup goroutines ...
	I1017 19:43:06.948320  766638 start.go:246] waiting for cluster config update ...
	I1017 19:43:06.948338  766638 start.go:255] writing updated cluster config ...
	I1017 19:43:06.948713  766638 ssh_runner.go:195] Run: rm -f paused
	I1017 19:43:06.955232  766638 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:43:06.959214  766638 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vckxk" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 19:43:08.964913  766638 pod_ready.go:104] pod "coredns-66bc5c9577-vckxk" is not "Ready", error: <nil>
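The apiserver health gate in the block above is a plain GET against /healthz that must return 200 with the body "ok". Reproduced by hand against the same endpoint (a sketch, not the exact client minikube uses; -k skips TLS verification since this probe does not present the cluster CA):

	curl -fsSk https://192.168.85.2:8444/healthz   # prints: ok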
	I1017 19:43:08.182714  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:08.682813  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:09.182970  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:09.682324  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:10.183036  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:10.682889  761258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:10.780997  761258 kubeadm.go:1113] duration metric: took 4.777398825s to wait for elevateKubeSystemPrivileges
	I1017 19:43:10.781043  761258 kubeadm.go:402] duration metric: took 16.212143267s to StartCluster
	I1017 19:43:10.781082  761258 settings.go:142] acquiring lock: {Name:mkb8ebc6edbbb6915dd74086f502bcc2721555a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:10.781175  761258 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:43:10.782495  761258 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:10.782794  761258 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:43:10.782815  761258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 19:43:10.782906  761258 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:43:10.782997  761258 config.go:182] Loaded profile config "enable-default-cni-448344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:10.783006  761258 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-448344"
	I1017 19:43:10.783008  761258 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-448344"
	I1017 19:43:10.783025  761258 addons.go:238] Setting addon storage-provisioner=true in "enable-default-cni-448344"
	I1017 19:43:10.783035  761258 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-448344"
	I1017 19:43:10.783063  761258 host.go:66] Checking if "enable-default-cni-448344" exists ...
	I1017 19:43:10.783424  761258 cli_runner.go:164] Run: docker container inspect enable-default-cni-448344 --format={{.State.Status}}
	I1017 19:43:10.783605  761258 cli_runner.go:164] Run: docker container inspect enable-default-cni-448344 --format={{.State.Status}}
	I1017 19:43:10.784791  761258 out.go:179] * Verifying Kubernetes components...
	I1017 19:43:10.786202  761258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:43:10.810001  761258 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 19:43:07.050227  760682 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 19:43:07.050291  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:07.050325  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-448344 minikube.k8s.io/updated_at=2025_10_17T19_43_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=auto-448344 minikube.k8s.io/primary=true
	I1017 19:43:07.134622  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:07.148253  760682 ops.go:34] apiserver oom_adj: -16
	I1017 19:43:07.636724  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:08.134822  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:08.635079  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:09.135302  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:09.634674  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:10.135730  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:10.634924  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:11.135610  760682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:43:11.276013  760682 kubeadm.go:1113] duration metric: took 4.225780364s to wait for elevateKubeSystemPrivileges
	I1017 19:43:11.276050  760682 kubeadm.go:402] duration metric: took 16.677676822s to StartCluster
	I1017 19:43:11.276074  760682 settings.go:142] acquiring lock: {Name:mkb8ebc6edbbb6915dd74086f502bcc2721555a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:11.276139  760682 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:43:11.277979  760682 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:11.278260  760682 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:43:11.278979  760682 config.go:182] Loaded profile config "auto-448344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:11.279031  760682 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:43:11.279110  760682 addons.go:69] Setting storage-provisioner=true in profile "auto-448344"
	I1017 19:43:11.279129  760682 addons.go:238] Setting addon storage-provisioner=true in "auto-448344"
	I1017 19:43:11.279160  760682 host.go:66] Checking if "auto-448344" exists ...
	I1017 19:43:11.279742  760682 cli_runner.go:164] Run: docker container inspect auto-448344 --format={{.State.Status}}
	I1017 19:43:11.279926  760682 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 19:43:11.280373  760682 addons.go:69] Setting default-storageclass=true in profile "auto-448344"
	I1017 19:43:11.280397  760682 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-448344"
	I1017 19:43:11.280756  760682 cli_runner.go:164] Run: docker container inspect auto-448344 --format={{.State.Status}}
	I1017 19:43:11.282501  760682 out.go:179] * Verifying Kubernetes components...
	I1017 19:43:06.594508  769029 out.go:252] * Restarting existing docker container for "newest-cni-438547" ...
	I1017 19:43:06.594591  769029 cli_runner.go:164] Run: docker start newest-cni-438547
	I1017 19:43:07.007217  769029 cli_runner.go:164] Run: docker container inspect newest-cni-438547 --format={{.State.Status}}
	I1017 19:43:07.031616  769029 kic.go:430] container "newest-cni-438547" state is running.
	I1017 19:43:07.032141  769029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438547
	I1017 19:43:07.053527  769029 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/config.json ...
	I1017 19:43:07.053907  769029 machine.go:93] provisionDockerMachine start ...
	I1017 19:43:07.054003  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:07.077736  769029 main.go:141] libmachine: Using SSH client type: native
	I1017 19:43:07.078071  769029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1017 19:43:07.078091  769029 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:43:07.078915  769029 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35926->127.0.0.1:33478: read: connection reset by peer
	I1017 19:43:10.244014  769029 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-438547
	
	I1017 19:43:10.244048  769029 ubuntu.go:182] provisioning hostname "newest-cni-438547"
	I1017 19:43:10.244117  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:10.272575  769029 main.go:141] libmachine: Using SSH client type: native
	I1017 19:43:10.273026  769029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1017 19:43:10.273046  769029 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-438547 && echo "newest-cni-438547" | sudo tee /etc/hostname
	I1017 19:43:10.464806  769029 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-438547
	
	I1017 19:43:10.464892  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:10.490588  769029 main.go:141] libmachine: Using SSH client type: native
	I1017 19:43:10.491359  769029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1017 19:43:10.491396  769029 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-438547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-438547/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-438547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:43:10.649799  769029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
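The script above is the idempotent half of hostname provisioning: it touches /etc/hosts only when no entry for the new hostname exists, rewriting the 127.0.1.1 line in place if there is one and appending otherwise, which is why the command's output here is empty. A quick post-hoc check inside the container (a sketch, not part of the captured session):

	hostname                         # expect: newest-cni-438547
	grep -n '127.0.1.1' /etc/hosts   # expect the newest-cni-438547 mapping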
	I1017 19:43:10.649835  769029 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-492109/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-492109/.minikube}
	I1017 19:43:10.649871  769029 ubuntu.go:190] setting up certificates
	I1017 19:43:10.649885  769029 provision.go:84] configureAuth start
	I1017 19:43:10.649950  769029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438547
	I1017 19:43:10.677218  769029 provision.go:143] copyHostCerts
	I1017 19:43:10.677285  769029 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem, removing ...
	I1017 19:43:10.677310  769029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem
	I1017 19:43:10.677396  769029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/ca.pem (1078 bytes)
	I1017 19:43:10.677535  769029 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem, removing ...
	I1017 19:43:10.677545  769029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem
	I1017 19:43:10.677589  769029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/cert.pem (1123 bytes)
	I1017 19:43:10.677679  769029 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem, removing ...
	I1017 19:43:10.677701  769029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem
	I1017 19:43:10.677745  769029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-492109/.minikube/key.pem (1679 bytes)
	I1017 19:43:10.677888  769029 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem org=jenkins.newest-cni-438547 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-438547]
	I1017 19:43:10.852497  769029 provision.go:177] copyRemoteCerts
	I1017 19:43:10.852627  769029 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:43:10.852692  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:10.884273  769029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:43:11.006525  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 19:43:11.042956  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 19:43:11.076327  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:43:11.108001  769029 provision.go:87] duration metric: took 458.094838ms to configureAuth
	I1017 19:43:11.108040  769029 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:43:11.108300  769029 config.go:182] Loaded profile config "newest-cni-438547": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:11.108449  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:11.135940  769029 main.go:141] libmachine: Using SSH client type: native
	I1017 19:43:11.136247  769029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1017 19:43:11.136267  769029 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:43:11.284740  760682 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:43:11.322018  760682 addons.go:238] Setting addon default-storageclass=true in "auto-448344"
	I1017 19:43:11.322105  760682 host.go:66] Checking if "auto-448344" exists ...
	I1017 19:43:11.322599  760682 cli_runner.go:164] Run: docker container inspect auto-448344 --format={{.State.Status}}
	I1017 19:43:11.324535  760682 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 19:43:10.810740  761258 addons.go:238] Setting addon default-storageclass=true in "enable-default-cni-448344"
	I1017 19:43:10.810793  761258 host.go:66] Checking if "enable-default-cni-448344" exists ...
	I1017 19:43:10.811313  761258 cli_runner.go:164] Run: docker container inspect enable-default-cni-448344 --format={{.State.Status}}
	I1017 19:43:10.811674  761258 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:43:10.811813  761258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 19:43:10.811913  761258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-448344
	I1017 19:43:10.855355  761258 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 19:43:10.855380  761258 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 19:43:10.855445  761258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-448344
	I1017 19:43:10.855760  761258 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/enable-default-cni-448344/id_rsa Username:docker}
	I1017 19:43:10.886087  761258 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/enable-default-cni-448344/id_rsa Username:docker}
	I1017 19:43:10.929245  761258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 19:43:10.994421  761258 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:43:11.007316  761258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:43:11.043725  761258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 19:43:11.274930  761258 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
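The long sed pipeline a few lines up splices a hosts block into the CoreDNS Corefile ahead of the forward directive (and adds a log directive before errors), so host.minikube.internal resolves to the network gateway from inside pods. The resulting Corefile fragment should look approximately like this, reconstructed from the sed expressions rather than captured output:

	hosts {
	   192.168.94.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf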
	I1017 19:43:11.276268  761258 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-448344" to be "Ready" ...
	I1017 19:43:11.309388  761258 node_ready.go:49] node "enable-default-cni-448344" is "Ready"
	I1017 19:43:11.309433  761258 node_ready.go:38] duration metric: took 33.139971ms for node "enable-default-cni-448344" to be "Ready" ...
	I1017 19:43:11.309471  761258 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:43:11.309674  761258 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:43:11.694162  761258 api_server.go:72] duration metric: took 911.320616ms to wait for apiserver process to appear ...
	I1017 19:43:11.694193  761258 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:43:11.694215  761258 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1017 19:43:11.710573  761258 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1017 19:43:11.712375  761258 api_server.go:141] control plane version: v1.34.1
	I1017 19:43:11.712413  761258 api_server.go:131] duration metric: took 18.211885ms to wait for apiserver health ...
	I1017 19:43:11.712424  761258 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:43:11.713505  761258 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1017 19:43:11.326008  760682 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:43:11.326029  760682 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 19:43:11.326104  760682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-448344
	I1017 19:43:11.366713  760682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/auto-448344/id_rsa Username:docker}
	I1017 19:43:11.371949  760682 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 19:43:11.372028  760682 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 19:43:11.372154  760682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-448344
	I1017 19:43:11.405329  760682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/auto-448344/id_rsa Username:docker}
	I1017 19:43:11.529442  760682 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:43:11.529658  760682 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 19:43:11.557258  760682 node_ready.go:35] waiting up to 15m0s for node "auto-448344" to be "Ready" ...
	I1017 19:43:11.588966  760682 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 19:43:11.593476  760682 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:43:11.866988  760682 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1017 19:43:12.112780  760682 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
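Both profiles finish addon setup by applying storage-provisioner.yaml and storageclass.yaml. Whether the class actually became the cluster default can be confirmed with a one-liner (the class name "standard" is minikube's usual default and an assumption here, not shown in the log):

	kubectl get storageclass
	# the default class is flagged "(default)" via the
	# storageclass.kubernetes.io/is-default-class: "true" annotation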
	I1017 19:43:11.715142  761258 addons.go:514] duration metric: took 932.242897ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1017 19:43:11.722883  761258 system_pods.go:59] 8 kube-system pods found
	I1017 19:43:11.722947  761258 system_pods.go:61] "coredns-66bc5c9577-frpnt" [7983e2df-2598-460b-8f12-006554076f00] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:43:11.722961  761258 system_pods.go:61] "coredns-66bc5c9577-r5brm" [d4d75925-5c02-4ebc-8def-89369fde949b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:43:11.722980  761258 system_pods.go:61] "etcd-enable-default-cni-448344" [4bcbcf5f-6477-4042-96aa-264c9f5cdb46] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:43:11.722999  761258 system_pods.go:61] "kube-apiserver-enable-default-cni-448344" [92060ff1-2299-4284-bf52-3550b852c490] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:43:11.723028  761258 system_pods.go:61] "kube-controller-manager-enable-default-cni-448344" [3fc1f75c-e665-4582-b13a-99cf265b4a6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:43:11.723044  761258 system_pods.go:61] "kube-proxy-djghb" [c5d7ed52-990c-41a1-91d6-8d934775891b] Running
	I1017 19:43:11.723052  761258 system_pods.go:61] "kube-scheduler-enable-default-cni-448344" [af441ecd-8c3f-478b-8d76-74e270caa7f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:43:11.723059  761258 system_pods.go:61] "storage-provisioner" [2024497c-706b-4a1e-8ea6-ca5118bac96a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:43:11.723083  761258 system_pods.go:74] duration metric: took 10.650615ms to wait for pod list to return data ...
	I1017 19:43:11.723102  761258 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:43:11.728867  761258 default_sa.go:45] found service account: "default"
	I1017 19:43:11.728918  761258 default_sa.go:55] duration metric: took 5.806956ms for default service account to be created ...
	I1017 19:43:11.728933  761258 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:43:11.735569  761258 system_pods.go:86] 8 kube-system pods found
	I1017 19:43:11.735616  761258 system_pods.go:89] "coredns-66bc5c9577-frpnt" [7983e2df-2598-460b-8f12-006554076f00] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:43:11.735629  761258 system_pods.go:89] "coredns-66bc5c9577-r5brm" [d4d75925-5c02-4ebc-8def-89369fde949b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:43:11.735639  761258 system_pods.go:89] "etcd-enable-default-cni-448344" [4bcbcf5f-6477-4042-96aa-264c9f5cdb46] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:43:11.735649  761258 system_pods.go:89] "kube-apiserver-enable-default-cni-448344" [92060ff1-2299-4284-bf52-3550b852c490] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:43:11.735661  761258 system_pods.go:89] "kube-controller-manager-enable-default-cni-448344" [3fc1f75c-e665-4582-b13a-99cf265b4a6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:43:11.735668  761258 system_pods.go:89] "kube-proxy-djghb" [c5d7ed52-990c-41a1-91d6-8d934775891b] Running
	I1017 19:43:11.735676  761258 system_pods.go:89] "kube-scheduler-enable-default-cni-448344" [af441ecd-8c3f-478b-8d76-74e270caa7f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:43:11.735693  761258 system_pods.go:89] "storage-provisioner" [2024497c-706b-4a1e-8ea6-ca5118bac96a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:43:11.735705  761258 system_pods.go:126] duration metric: took 6.763012ms to wait for k8s-apps to be running ...
	I1017 19:43:11.735715  761258 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:43:11.735772  761258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:43:11.757352  761258 system_svc.go:56] duration metric: took 21.628393ms WaitForService to wait for kubelet
	I1017 19:43:11.757386  761258 kubeadm.go:586] duration metric: took 974.552141ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:43:11.757408  761258 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:43:11.763786  761258 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 19:43:11.763817  761258 node_conditions.go:123] node cpu capacity is 8
	I1017 19:43:11.763834  761258 node_conditions.go:105] duration metric: took 6.419185ms to run NodePressure ...
	I1017 19:43:11.763912  761258 start.go:241] waiting for startup goroutines ...
	I1017 19:43:11.780086  761258 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-448344" context rescaled to 1 replicas
	I1017 19:43:11.780205  761258 start.go:246] waiting for cluster config update ...
	I1017 19:43:11.780227  761258 start.go:255] writing updated cluster config ...
	I1017 19:43:11.780586  761258 ssh_runner.go:195] Run: rm -f paused
	I1017 19:43:11.788295  761258 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:43:11.793776  761258 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-frpnt" in "kube-system" namespace to be "Ready" or be gone ...
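The extra pod_ready phase waits up to 4m0s for every control-plane pod matching the listed labels. For a single component the check is close to the following sketch, with the caveat that the test also accepts the pod disappearing, which kubectl wait does not model:

	kubectl -n kube-system wait pod -l k8s-app=kube-dns \
	    --for=condition=Ready --timeout=240s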
	I1017 19:43:11.588902  769029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:43:11.588929  769029 machine.go:96] duration metric: took 4.53500188s to provisionDockerMachine
	I1017 19:43:11.588943  769029 start.go:293] postStartSetup for "newest-cni-438547" (driver="docker")
	I1017 19:43:11.588972  769029 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:43:11.589045  769029 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:43:11.589091  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:11.618485  769029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:43:11.748813  769029 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:43:11.755793  769029 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:43:11.755828  769029 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:43:11.755842  769029 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/addons for local assets ...
	I1017 19:43:11.755912  769029 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-492109/.minikube/files for local assets ...
	I1017 19:43:11.756008  769029 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem -> 4957252.pem in /etc/ssl/certs
	I1017 19:43:11.756148  769029 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:43:11.773489  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:43:11.805867  769029 start.go:296] duration metric: took 216.903683ms for postStartSetup
	I1017 19:43:11.805973  769029 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:43:11.806027  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:11.833201  769029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:43:11.942583  769029 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:43:11.952562  769029 fix.go:56] duration metric: took 5.382564977s for fixHost
	I1017 19:43:11.952597  769029 start.go:83] releasing machines lock for "newest-cni-438547", held for 5.382631851s
	I1017 19:43:11.952672  769029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438547
	I1017 19:43:11.976386  769029 ssh_runner.go:195] Run: cat /version.json
	I1017 19:43:11.976491  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:11.976509  769029 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:43:11.976664  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:12.005417  769029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:43:12.007192  769029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:43:12.120622  769029 ssh_runner.go:195] Run: systemctl --version
	I1017 19:43:12.218267  769029 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:43:12.270948  769029 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:43:12.277770  769029 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:43:12.277845  769029 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:43:12.290627  769029 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:43:12.290657  769029 start.go:495] detecting cgroup driver to use...
	I1017 19:43:12.290715  769029 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 19:43:12.290778  769029 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:43:12.313299  769029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:43:12.333776  769029 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:43:12.333836  769029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:43:12.354902  769029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:43:12.373087  769029 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:43:12.488958  769029 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:43:12.597251  769029 docker.go:234] disabling docker service ...
	I1017 19:43:12.597309  769029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:43:12.614620  769029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:43:12.630483  769029 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:43:12.733453  769029 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:43:12.829488  769029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:43:12.844553  769029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:43:12.862001  769029 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:43:12.862081  769029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:43:12.872938  769029 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 19:43:12.873024  769029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:43:12.884478  769029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:43:12.896521  769029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:43:12.907132  769029 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:43:12.917745  769029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:43:12.929051  769029 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:43:12.941179  769029 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:43:12.952369  769029 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:43:12.961822  769029 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:43:12.971004  769029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:43:13.068142  769029 ssh_runner.go:195] Run: sudo systemctl restart crio
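Taken together, the sed edits above leave the cri-o drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly these settings before the restart; this is reconstructed from the commands, not a captured file, and the TOML table placement is assumed:

	pause_image = "registry.k8s.io/pause:3.10.1"   # crio.image table
	cgroup_manager = "systemd"                     # crio.runtime table
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]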
	I1017 19:43:13.545054  769029 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:43:13.545193  769029 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:43:13.550192  769029 start.go:563] Will wait 60s for crictl version
	I1017 19:43:13.550270  769029 ssh_runner.go:195] Run: which crictl
	I1017 19:43:13.555176  769029 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:43:13.584883  769029 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:43:13.584952  769029 ssh_runner.go:195] Run: crio --version
	I1017 19:43:13.616969  769029 ssh_runner.go:195] Run: crio --version
	I1017 19:43:13.654295  769029 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:43:13.655638  769029 cli_runner.go:164] Run: docker network inspect newest-cni-438547 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:43:13.676350  769029 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1017 19:43:13.681804  769029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:43:13.696745  769029 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1017 19:43:13.697943  769029 kubeadm.go:883] updating cluster {Name:newest-cni-438547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:43:13.698117  769029 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:43:13.698203  769029 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:43:13.735573  769029 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:43:13.735603  769029 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:43:13.735662  769029 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:43:13.767068  769029 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:43:13.767092  769029 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:43:13.767107  769029 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1017 19:43:13.767210  769029 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-438547 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:43:13.767275  769029 ssh_runner.go:195] Run: crio config
	I1017 19:43:13.823554  769029 cni.go:84] Creating CNI manager for ""
	I1017 19:43:13.823582  769029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:43:13.823607  769029 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1017 19:43:13.823638  769029 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-438547 NodeName:newest-cni-438547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:43:13.823829  769029 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-438547"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 19:43:13.823912  769029 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:43:13.834125  769029 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:43:13.834224  769029 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:43:13.846415  769029 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1017 19:43:13.864709  769029 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:43:13.882057  769029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
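With the rendered config staged at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked before anything applies it; a hedged spot check using the kubeadm binary minikube ships to the node (the validate subcommand exists only in recent kubeadm releases):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new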
	I1017 19:43:13.899973  769029 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1017 19:43:13.905150  769029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:43:13.919261  769029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:43:14.044492  769029 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:43:14.071853  769029 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547 for IP: 192.168.103.2
	I1017 19:43:14.071878  769029 certs.go:195] generating shared ca certs ...
	I1017 19:43:14.071900  769029 certs.go:227] acquiring lock for ca certs: {Name:mkc97483d62151ba5c32d923dd19e3e2b3661468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:14.072080  769029 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key
	I1017 19:43:14.072150  769029 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key
	I1017 19:43:14.072161  769029 certs.go:257] generating profile certs ...
	I1017 19:43:14.072402  769029 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/client.key
	I1017 19:43:14.072487  769029 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/apiserver.key.df6baa7a
	I1017 19:43:14.072531  769029 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/proxy-client.key
	I1017 19:43:14.072666  769029 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725.pem (1338 bytes)
	W1017 19:43:14.072771  769029 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725_empty.pem, impossibly tiny 0 bytes
	I1017 19:43:14.072797  769029 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 19:43:14.072845  769029 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/ca.pem (1078 bytes)
	I1017 19:43:14.072877  769029 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:43:14.072903  769029 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/certs/key.pem (1679 bytes)
	I1017 19:43:14.073021  769029 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem (1708 bytes)
	I1017 19:43:14.074186  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:43:14.100888  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:43:14.126129  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:43:14.149921  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 19:43:14.183666  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 19:43:14.208296  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:43:14.233165  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:43:14.259177  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/newest-cni-438547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:43:14.282988  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/ssl/certs/4957252.pem --> /usr/share/ca-certificates/4957252.pem (1708 bytes)
	I1017 19:43:14.307613  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:43:14.331583  769029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-492109/.minikube/certs/495725.pem --> /usr/share/ca-certificates/495725.pem (1338 bytes)
	I1017 19:43:14.357401  769029 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:43:14.373675  769029 ssh_runner.go:195] Run: openssl version
	I1017 19:43:14.381023  769029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4957252.pem && ln -fs /usr/share/ca-certificates/4957252.pem /etc/ssl/certs/4957252.pem"
	I1017 19:43:14.391990  769029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4957252.pem
	I1017 19:43:14.396328  769029 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/4957252.pem
	I1017 19:43:14.396407  769029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4957252.pem
	I1017 19:43:14.439212  769029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4957252.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:43:14.451347  769029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:43:14.462548  769029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:43:14.468226  769029 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:43:14.468277  769029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:43:14.508308  769029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:43:14.519832  769029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/495725.pem && ln -fs /usr/share/ca-certificates/495725.pem /etc/ssl/certs/495725.pem"
	I1017 19:43:14.530589  769029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/495725.pem
	I1017 19:43:14.536202  769029 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/495725.pem
	I1017 19:43:14.536272  769029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/495725.pem
	I1017 19:43:14.584197  769029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/495725.pem /etc/ssl/certs/51391683.0"
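Each openssl x509 -hash -noout run above prints the certificate's subject-name hash, and that hash becomes the symlink name (b5213941.0, 3ec20f2e.0, 51391683.0) that OpenSSL's hash-based lookup expects under /etc/ssl/certs. The pattern, spelled out for the minikubeCA cert:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"                                   # b5213941, per the log above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"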
	I1017 19:43:14.593801  769029 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:43:14.598812  769029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:43:14.642012  769029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:43:14.689390  769029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:43:14.750185  769029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:43:14.810787  769029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:43:14.867293  769029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
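The six openssl x509 -checkend 86400 runs above ask whether each control-plane certificate expires within the next 86400 seconds (24 hours): exit status 0 means the cert stays valid past that window, non-zero would trigger regeneration. The check reduces to a plain exit-status test:

    if sudo openssl x509 -noout -checkend 86400 \
         -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "valid for at least 24h"
    else
      echo "expires within 24h - would be regenerated"
    fi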
	I1017 19:43:14.923926  769029 kubeadm.go:400] StartCluster: {Name:newest-cni-438547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:43:14.924114  769029 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:43:14.924253  769029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:43:14.968342  769029 cri.go:89] found id: "783ad2b5346ea181a472270de81a22e9136094d7a4a6901197f9b3b4dd831dd6"
	I1017 19:43:14.968389  769029 cri.go:89] found id: "fba4a1410021bdf673cba310189091795eb97198d5419e4df6a5ea9b8ceea611"
	I1017 19:43:14.968396  769029 cri.go:89] found id: "2e544eb21d59ec702243e34c0c9957da878518767a5d668acdbf48ab0caa8515"
	I1017 19:43:14.968402  769029 cri.go:89] found id: "8140e5435bac0f77a7bf313d441166129425c73e3e1d7fabfc13834d3cfa44bd"
	I1017 19:43:14.968408  769029 cri.go:89] found id: ""
	I1017 19:43:14.968456  769029 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 19:43:14.985899  769029 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:43:14Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:43:14.985989  769029 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:43:14.997604  769029 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 19:43:14.997638  769029 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 19:43:14.997713  769029 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 19:43:15.013664  769029 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:43:15.016226  769029 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-438547" does not appear in /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:43:15.017235  769029 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-492109/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-438547" cluster setting kubeconfig missing "newest-cni-438547" context setting]
	I1017 19:43:15.019314  769029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:15.021434  769029 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 19:43:15.032992  769029 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1017 19:43:15.033043  769029 kubeadm.go:601] duration metric: took 35.397345ms to restartPrimaryControlPlane
	I1017 19:43:15.033057  769029 kubeadm.go:402] duration metric: took 109.148342ms to StartCluster
	I1017 19:43:15.033082  769029 settings.go:142] acquiring lock: {Name:mkb8ebc6edbbb6915dd74086f502bcc2721555a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:15.033205  769029 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:43:15.035434  769029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/kubeconfig: {Name:mkc99c1a086f83f30612e2820a6063c20b9217b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:15.035742  769029 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:43:15.035914  769029 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:43:15.036025  769029 config.go:182] Loaded profile config "newest-cni-438547": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:15.036040  769029 addons.go:69] Setting dashboard=true in profile "newest-cni-438547"
	I1017 19:43:15.036056  769029 addons.go:238] Setting addon dashboard=true in "newest-cni-438547"
	W1017 19:43:15.036073  769029 addons.go:247] addon dashboard should already be in state true
	I1017 19:43:15.036029  769029 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-438547"
	I1017 19:43:15.036088  769029 addons.go:69] Setting default-storageclass=true in profile "newest-cni-438547"
	I1017 19:43:15.036112  769029 host.go:66] Checking if "newest-cni-438547" exists ...
	I1017 19:43:15.036123  769029 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-438547"
	I1017 19:43:15.036094  769029 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-438547"
	W1017 19:43:15.036235  769029 addons.go:247] addon storage-provisioner should already be in state true
	I1017 19:43:15.036258  769029 host.go:66] Checking if "newest-cni-438547" exists ...
	I1017 19:43:15.036502  769029 cli_runner.go:164] Run: docker container inspect newest-cni-438547 --format={{.State.Status}}
	I1017 19:43:15.036644  769029 cli_runner.go:164] Run: docker container inspect newest-cni-438547 --format={{.State.Status}}
	I1017 19:43:15.036930  769029 cli_runner.go:164] Run: docker container inspect newest-cni-438547 --format={{.State.Status}}
	I1017 19:43:15.042278  769029 out.go:179] * Verifying Kubernetes components...
	I1017 19:43:15.047340  769029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:43:15.064873  769029 addons.go:238] Setting addon default-storageclass=true in "newest-cni-438547"
	W1017 19:43:15.064899  769029 addons.go:247] addon default-storageclass should already be in state true
	I1017 19:43:15.064930  769029 host.go:66] Checking if "newest-cni-438547" exists ...
	I1017 19:43:15.065420  769029 cli_runner.go:164] Run: docker container inspect newest-cni-438547 --format={{.State.Status}}
	I1017 19:43:15.072536  769029 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 19:43:15.072614  769029 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 19:43:15.074636  769029 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:43:15.075211  769029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 19:43:15.075318  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:15.075181  769029 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1017 19:43:10.969605  766638 pod_ready.go:104] pod "coredns-66bc5c9577-vckxk" is not "Ready", error: <nil>
	W1017 19:43:13.465361  766638 pod_ready.go:104] pod "coredns-66bc5c9577-vckxk" is not "Ready", error: <nil>
	W1017 19:43:15.467307  766638 pod_ready.go:104] pod "coredns-66bc5c9577-vckxk" is not "Ready", error: <nil>
	I1017 19:43:15.076801  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 19:43:15.076850  769029 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 19:43:15.076936  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:15.096916  769029 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 19:43:15.096946  769029 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 19:43:15.097026  769029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438547
	I1017 19:43:15.114759  769029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:43:15.116989  769029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:43:15.130146  769029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/newest-cni-438547/id_rsa Username:docker}
	I1017 19:43:15.224366  769029 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:43:15.246009  769029 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:43:15.246144  769029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:43:15.247206  769029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:43:15.249128  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 19:43:15.249146  769029 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 19:43:15.269436  769029 api_server.go:72] duration metric: took 233.647317ms to wait for apiserver process to appear ...
	I1017 19:43:15.269471  769029 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:43:15.269495  769029 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 19:43:15.270227  769029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 19:43:15.279303  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 19:43:15.279332  769029 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 19:43:15.306530  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 19:43:15.306564  769029 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 19:43:15.326750  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 19:43:15.326781  769029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 19:43:15.352386  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 19:43:15.352417  769029 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 19:43:15.374346  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 19:43:15.374388  769029 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 19:43:15.391299  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 19:43:15.391339  769029 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 19:43:15.408563  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 19:43:15.408603  769029 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 19:43:15.427279  769029 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 19:43:15.427308  769029 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 19:43:15.445397  769029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 19:43:12.114082  760682 addons.go:514] duration metric: took 835.040796ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1017 19:43:12.372372  760682 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-448344" context rescaled to 1 replicas
	W1017 19:43:13.560661  760682 node_ready.go:57] node "auto-448344" has "Ready":"False" status (will retry)
	W1017 19:43:15.561577  760682 node_ready.go:57] node "auto-448344" has "Ready":"False" status (will retry)
	I1017 19:43:16.925552  769029 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1017 19:43:16.925589  769029 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1017 19:43:16.925608  769029 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 19:43:16.934120  769029 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1017 19:43:16.934153  769029 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1017 19:43:17.270534  769029 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 19:43:17.275240  769029 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 19:43:17.275272  769029 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 19:43:17.516462  769029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.269195174s)
	I1017 19:43:17.516481  769029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.24621924s)
	I1017 19:43:17.516592  769029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.071155849s)
	I1017 19:43:17.518432  769029 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-438547 addons enable metrics-server
	
	I1017 19:43:17.530597  769029 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1017 19:43:13.800622  761258 pod_ready.go:104] pod "coredns-66bc5c9577-frpnt" is not "Ready", error: <nil>
	W1017 19:43:15.802200  761258 pod_ready.go:104] pod "coredns-66bc5c9577-frpnt" is not "Ready", error: <nil>
	I1017 19:43:17.532145  769029 addons.go:514] duration metric: took 2.496242951s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1017 19:43:17.769598  769029 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 19:43:17.773963  769029 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 19:43:17.773989  769029 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 19:43:18.270170  769029 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 19:43:18.275117  769029 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
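The healthz poll above moves through three phases: 403 while the request is anonymous and the RBAC bootstrap roles that normally let unauthenticated clients read /healthz are not yet installed, 500 while the rbac/bootstrap-roles and scheduling poststarthooks are still pending, and finally 200. The same unauthenticated probe by hand (the -k mirrors minikube's skip-verify client):

    curl -sk https://192.168.103.2:8443/healthz
    # or, with cluster credentials once the kubeconfig is repaired:
    kubectl --context newest-cni-438547 get --raw /healthz   # prints "ok"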
	I1017 19:43:18.276322  769029 api_server.go:141] control plane version: v1.34.1
	I1017 19:43:18.276364  769029 api_server.go:131] duration metric: took 3.006886118s to wait for apiserver health ...
	I1017 19:43:18.276374  769029 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:43:18.279789  769029 system_pods.go:59] 8 kube-system pods found
	I1017 19:43:18.279825  769029 system_pods.go:61] "coredns-66bc5c9577-8pfhn" [6d0a8a45-e3f8-4e59-b735-4f1236cf5953] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 19:43:18.279837  769029 system_pods.go:61] "etcd-newest-cni-438547" [aaf7399b-5274-44fa-a929-a515b9341276] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:43:18.279846  769029 system_pods.go:61] "kindnet-nhg7f" [368f40c9-2ab9-4d9d-9310-950d3371f4c0] Running
	I1017 19:43:18.279868  769029 system_pods.go:61] "kube-apiserver-newest-cni-438547" [25c05b7c-518e-4bc1-94cc-e2a8a04f104b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:43:18.279882  769029 system_pods.go:61] "kube-controller-manager-newest-cni-438547" [eba5d490-129b-4739-95bd-e10a4fd73c40] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:43:18.279890  769029 system_pods.go:61] "kube-proxy-zfk4z" [a38161c3-4097-4e85-b391-e3b730dd90b6] Running
	I1017 19:43:18.279898  769029 system_pods.go:61] "kube-scheduler-newest-cni-438547" [8210e114-0804-429b-8518-30042567db4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:43:18.279907  769029 system_pods.go:61] "storage-provisioner" [39d961dc-a8fd-4066-b46e-3e02ec6d04f6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 19:43:18.279914  769029 system_pods.go:74] duration metric: took 3.534199ms to wait for pod list to return data ...
	I1017 19:43:18.279928  769029 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:43:18.283213  769029 default_sa.go:45] found service account: "default"
	I1017 19:43:18.283237  769029 default_sa.go:55] duration metric: took 3.295545ms for default service account to be created ...
	I1017 19:43:18.283249  769029 kubeadm.go:586] duration metric: took 3.247469876s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 19:43:18.283267  769029 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:43:18.285877  769029 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 19:43:18.285901  769029 node_conditions.go:123] node cpu capacity is 8
	I1017 19:43:18.285916  769029 node_conditions.go:105] duration metric: took 2.645649ms to run NodePressure ...
	I1017 19:43:18.285928  769029 start.go:241] waiting for startup goroutines ...
	I1017 19:43:18.285935  769029 start.go:246] waiting for cluster config update ...
	I1017 19:43:18.285945  769029 start.go:255] writing updated cluster config ...
	I1017 19:43:18.286199  769029 ssh_runner.go:195] Run: rm -f paused
	I1017 19:43:18.338456  769029 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 19:43:18.341423  769029 out.go:179] * Done! kubectl is now configured to use "newest-cni-438547" cluster and "default" namespace by default
	W1017 19:43:17.971240  766638 pod_ready.go:104] pod "coredns-66bc5c9577-vckxk" is not "Ready", error: <nil>
	W1017 19:43:20.465407  766638 pod_ready.go:104] pod "coredns-66bc5c9577-vckxk" is not "Ready", error: <nil>
	W1017 19:43:18.060406  760682 node_ready.go:57] node "auto-448344" has "Ready":"False" status (will retry)
	W1017 19:43:20.061360  760682 node_ready.go:57] node "auto-448344" has "Ready":"False" status (will retry)
	W1017 19:43:18.300646  761258 pod_ready.go:104] pod "coredns-66bc5c9577-frpnt" is not "Ready", error: <nil>
	W1017 19:43:20.799924  761258 pod_ready.go:104] pod "coredns-66bc5c9577-frpnt" is not "Ready", error: <nil>
	W1017 19:43:22.800021  761258 pod_ready.go:104] pod "coredns-66bc5c9577-frpnt" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.464469277Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.467936927Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=8ff39a9b-d3de-4391-b3fc-81d186b29d5d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.468606879Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c371cd10-1b01-455c-adcf-ed6315723d67 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.469718808Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.470156996Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.470590175Z" level=info msg="Ran pod sandbox c138f9334b126d7fd0e6a9c1b4678a36e2633d5363e264dba2b10d7c849be6d3 with infra container: kube-system/kube-proxy-zfk4z/POD" id=8ff39a9b-d3de-4391-b3fc-81d186b29d5d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.47080178Z" level=info msg="Ran pod sandbox adba52d0b36216dee9586ae2b99c77d48cfd4cd9bb88efb673ef24ae01166c50 with infra container: kube-system/kindnet-nhg7f/POD" id=c371cd10-1b01-455c-adcf-ed6315723d67 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.471989759Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=589b3a6f-5dd4-4f5a-96b3-a63d46819a52 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.472027367Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f2e6e3d2-2d2f-4511-989a-b3ea56e2f184 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.473048303Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6c9805a3-fd23-4c57-b2fe-87147f0b42ef name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.473085075Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7f7212b0-846e-40a7-9576-201fabbccc67 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.474181492Z" level=info msg="Creating container: kube-system/kube-proxy-zfk4z/kube-proxy" id=855d4096-0b03-4357-8d2a-71c40282b3b3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.474325864Z" level=info msg="Creating container: kube-system/kindnet-nhg7f/kindnet-cni" id=6d530d34-0eec-42a2-936b-5ea5dd6ca7e5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.474451619Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.474517993Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.478921513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.479583435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.481784614Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.482403469Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.51265261Z" level=info msg="Created container 396b79a83b6aad7be20450af8a558a28d65313c75e489cd893f4c91b119849ac: kube-system/kindnet-nhg7f/kindnet-cni" id=6d530d34-0eec-42a2-936b-5ea5dd6ca7e5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.513442457Z" level=info msg="Starting container: 396b79a83b6aad7be20450af8a558a28d65313c75e489cd893f4c91b119849ac" id=5800c0bf-da97-4e9c-aec3-6666c90e2b80 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.514894337Z" level=info msg="Created container 5cb60d7d09aab27fd00d9f02862df4772fb81077b845a75cc383b5fcfabe2bef: kube-system/kube-proxy-zfk4z/kube-proxy" id=855d4096-0b03-4357-8d2a-71c40282b3b3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.515557792Z" level=info msg="Started container" PID=1035 containerID=396b79a83b6aad7be20450af8a558a28d65313c75e489cd893f4c91b119849ac description=kube-system/kindnet-nhg7f/kindnet-cni id=5800c0bf-da97-4e9c-aec3-6666c90e2b80 name=/runtime.v1.RuntimeService/StartContainer sandboxID=adba52d0b36216dee9586ae2b99c77d48cfd4cd9bb88efb673ef24ae01166c50
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.515647399Z" level=info msg="Starting container: 5cb60d7d09aab27fd00d9f02862df4772fb81077b845a75cc383b5fcfabe2bef" id=e0efa087-9f2b-4327-89d4-112000764640 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:43:17 newest-cni-438547 crio[516]: time="2025-10-17T19:43:17.519164444Z" level=info msg="Started container" PID=1036 containerID=5cb60d7d09aab27fd00d9f02862df4772fb81077b845a75cc383b5fcfabe2bef description=kube-system/kube-proxy-zfk4z/kube-proxy id=e0efa087-9f2b-4327-89d4-112000764640 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c138f9334b126d7fd0e6a9c1b4678a36e2633d5363e264dba2b10d7c849be6d3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5cb60d7d09aab       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   c138f9334b126       kube-proxy-zfk4z                            kube-system
	396b79a83b6aa       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   adba52d0b3621       kindnet-nhg7f                               kube-system
	783ad2b5346ea       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   fc46ff75d1185       etcd-newest-cni-438547                      kube-system
	fba4a1410021b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   2e66efd6c4f83       kube-scheduler-newest-cni-438547            kube-system
	2e544eb21d59e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   fcfe720e63430       kube-controller-manager-newest-cni-438547   kube-system
	8140e5435bac0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   999d8b403501a       kube-apiserver-newest-cni-438547            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-438547
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-438547
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=newest-cni-438547
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_42_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:42:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-438547
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:43:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:43:17 +0000   Fri, 17 Oct 2025 19:42:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:43:17 +0000   Fri, 17 Oct 2025 19:42:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:43:17 +0000   Fri, 17 Oct 2025 19:42:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 17 Oct 2025 19:43:17 +0000   Fri, 17 Oct 2025 19:42:32 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-438547
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                6f16ffd1-311d-4f27-b795-37ce231ef7a2
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-438547                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         48s
	  kube-system                 kindnet-nhg7f                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      41s
	  kube-system                 kube-apiserver-newest-cni-438547             250m (3%)     0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-controller-manager-newest-cni-438547    200m (2%)     0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-proxy-zfk4z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-scheduler-newest-cni-438547             100m (1%)     0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 39s                kube-proxy       
	  Normal  Starting                 6s                 kube-proxy       
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node newest-cni-438547 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node newest-cni-438547 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node newest-cni-438547 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    47s                kubelet          Node newest-cni-438547 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  47s                kubelet          Node newest-cni-438547 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     47s                kubelet          Node newest-cni-438547 status is now: NodeHasSufficientPID
	  Normal  Starting                 47s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           42s                node-controller  Node newest-cni-438547 event: Registered Node newest-cni-438547 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x7 over 10s)  kubelet          Node newest-cni-438547 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x6 over 10s)  kubelet          Node newest-cni-438547 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x6 over 10s)  kubelet          Node newest-cni-438547 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-438547 event: Registered Node newest-cni-438547 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022229] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023876] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee e4 05 02 02 de 08 06
	[  +0.011274] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 4e 5e a6 cc 79 08 06
	
	
	==> etcd [783ad2b5346ea181a472270de81a22e9136094d7a4a6901197f9b3b4dd831dd6] <==
	{"level":"warn","ts":"2025-10-17T19:43:16.060572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.072380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.082804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.094925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.102875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.112083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.120277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.128369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.137329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.145919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.154155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.162983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.172417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.182975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.192048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.200879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.209172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.219407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.227135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.236111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.244964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.261960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.270593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.279863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:16.353198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42956","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:43:24 up  3:25,  0 user,  load average: 3.27, 3.32, 2.22
	Linux newest-cni-438547 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [396b79a83b6aad7be20450af8a558a28d65313c75e489cd893f4c91b119849ac] <==
	I1017 19:43:17.740482       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:43:17.740774       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1017 19:43:17.740936       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:43:17.740958       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:43:17.740988       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:43:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:43:17.939463       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:43:17.939481       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:43:17.939488       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:43:17.939590       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:43:18.339576       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:43:18.339631       1 metrics.go:72] Registering metrics
	I1017 19:43:18.339732       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [8140e5435bac0f77a7bf313d441166129425c73e3e1d7fabfc13834d3cfa44bd] <==
	I1017 19:43:16.992809       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1017 19:43:16.992898       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:43:16.992963       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:43:16.993277       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 19:43:16.994158       1 aggregator.go:171] initial CRD sync complete...
	I1017 19:43:16.994186       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 19:43:16.994192       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 19:43:16.994199       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:43:16.994984       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 19:43:16.995079       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 19:43:17.002839       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1017 19:43:17.003275       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 19:43:17.011488       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:43:17.013022       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:43:17.256261       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:43:17.312054       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 19:43:17.346499       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 19:43:17.368421       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:43:17.377643       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:43:17.419200       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.152.206"}
	I1017 19:43:17.429632       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.105.28"}
	I1017 19:43:17.894757       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:43:20.584999       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:43:20.634541       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:43:20.834647       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2e544eb21d59ec702243e34c0c9957da878518767a5d668acdbf48ab0caa8515] <==
	I1017 19:43:20.269187       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 19:43:20.270310       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 19:43:20.275583       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 19:43:20.277847       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 19:43:20.280895       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 19:43:20.281802       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 19:43:20.281819       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 19:43:20.281839       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 19:43:20.281849       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 19:43:20.281874       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:43:20.281913       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 19:43:20.281910       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 19:43:20.282019       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-438547"
	I1017 19:43:20.282084       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1017 19:43:20.282091       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 19:43:20.286776       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:43:20.293612       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 19:43:20.293664       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 19:43:20.293710       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 19:43:20.293721       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 19:43:20.293728       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 19:43:20.331008       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:43:20.331035       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:43:20.331043       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 19:43:20.343376       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5cb60d7d09aab27fd00d9f02862df4772fb81077b845a75cc383b5fcfabe2bef] <==
	I1017 19:43:17.559718       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:43:17.616622       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:43:17.717417       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:43:17.717455       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1017 19:43:17.717531       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:43:17.736274       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:43:17.736328       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:43:17.741868       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:43:17.742788       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:43:17.742822       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:43:17.744909       1 config.go:200] "Starting service config controller"
	I1017 19:43:17.744932       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:43:17.744939       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:43:17.744956       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:43:17.744975       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:43:17.744989       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:43:17.745019       1 config.go:309] "Starting node config controller"
	I1017 19:43:17.745031       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:43:17.745037       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:43:17.845856       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:43:17.845960       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:43:17.846060       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fba4a1410021bdf673cba310189091795eb97198d5419e4df6a5ea9b8ceea611] <==
	I1017 19:43:15.537612       1 serving.go:386] Generated self-signed cert in-memory
	W1017 19:43:16.939762       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 19:43:16.939827       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 19:43:16.939841       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 19:43:16.939849       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 19:43:16.970644       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 19:43:16.970768       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:43:16.976739       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:43:16.977275       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 19:43:16.977304       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:43:16.977428       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 19:43:17.077527       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:43:16 newest-cni-438547 kubelet[661]: E1017 19:43:16.215238     661 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-438547\" not found" node="newest-cni-438547"
	Oct 17 19:43:16 newest-cni-438547 kubelet[661]: E1017 19:43:16.215430     661 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-438547\" not found" node="newest-cni-438547"
	Oct 17 19:43:16 newest-cni-438547 kubelet[661]: E1017 19:43:16.216841     661 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-438547\" not found" node="newest-cni-438547"
	Oct 17 19:43:16 newest-cni-438547 kubelet[661]: I1017 19:43:16.961765     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.023948     661 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.024065     661 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.024106     661 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.025655     661 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: E1017 19:43:17.081611     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-438547\" already exists" pod="kube-system/kube-controller-manager-newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.081655     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: E1017 19:43:17.087166     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-438547\" already exists" pod="kube-system/kube-scheduler-newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.087208     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: E1017 19:43:17.093834     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-438547\" already exists" pod="kube-system/etcd-newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.093868     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: E1017 19:43:17.101137     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-438547\" already exists" pod="kube-system/kube-apiserver-newest-cni-438547"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.155881     661 apiserver.go:52] "Watching apiserver"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.161061     661 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.252985     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/368f40c9-2ab9-4d9d-9310-950d3371f4c0-lib-modules\") pod \"kindnet-nhg7f\" (UID: \"368f40c9-2ab9-4d9d-9310-950d3371f4c0\") " pod="kube-system/kindnet-nhg7f"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.253052     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a38161c3-4097-4e85-b391-e3b730dd90b6-xtables-lock\") pod \"kube-proxy-zfk4z\" (UID: \"a38161c3-4097-4e85-b391-e3b730dd90b6\") " pod="kube-system/kube-proxy-zfk4z"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.253082     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a38161c3-4097-4e85-b391-e3b730dd90b6-lib-modules\") pod \"kube-proxy-zfk4z\" (UID: \"a38161c3-4097-4e85-b391-e3b730dd90b6\") " pod="kube-system/kube-proxy-zfk4z"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.253437     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/368f40c9-2ab9-4d9d-9310-950d3371f4c0-cni-cfg\") pod \"kindnet-nhg7f\" (UID: \"368f40c9-2ab9-4d9d-9310-950d3371f4c0\") " pod="kube-system/kindnet-nhg7f"
	Oct 17 19:43:17 newest-cni-438547 kubelet[661]: I1017 19:43:17.253477     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/368f40c9-2ab9-4d9d-9310-950d3371f4c0-xtables-lock\") pod \"kindnet-nhg7f\" (UID: \"368f40c9-2ab9-4d9d-9310-950d3371f4c0\") " pod="kube-system/kindnet-nhg7f"
	Oct 17 19:43:19 newest-cni-438547 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 19:43:19 newest-cni-438547 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 19:43:19 newest-cni-438547 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-438547 -n newest-cni-438547
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-438547 -n newest-cni-438547: exit status 2 (331.864879ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-438547 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-8pfhn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tsn4q kubernetes-dashboard-855c9754f9-kx9tq
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-438547 describe pod coredns-66bc5c9577-8pfhn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tsn4q kubernetes-dashboard-855c9754f9-kx9tq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-438547 describe pod coredns-66bc5c9577-8pfhn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tsn4q kubernetes-dashboard-855c9754f9-kx9tq: exit status 1 (71.005412ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-8pfhn" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-tsn4q" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-kx9tq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-438547 describe pod coredns-66bc5c9577-8pfhn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tsn4q kubernetes-dashboard-855c9754f9-kx9tq: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.14s)
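Note: the describe-nodes output captured above points at the likely trigger for this failure: the kubelet reports Ready=False ("container runtime network not ready ... no CNI configuration file in /etc/cni/net.d/") and the node still carries both node.kubernetes.io/not-ready taints, which would also explain why coredns, storage-provisioner and the dashboard pods never came back after the restart. A minimal way to confirm, assuming the profile is still up (these commands are illustrative and were not part of the test run):

	# check whether kindnet has written its CNI config on the node yet
	out/minikube-linux-amd64 ssh -p newest-cni-438547 -- ls -l /etc/cni/net.d/
	# re-check node readiness once a config file appears
	kubectl --context newest-cni-438547 get nodes -o wide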

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-112878 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-112878 --alsologtostderr -v=1: exit status 80 (2.516011278s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-112878 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:43:51.798467  782063 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:43:51.798879  782063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:43:51.798896  782063 out.go:374] Setting ErrFile to fd 2...
	I1017 19:43:51.798903  782063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:43:51.799229  782063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:43:51.799641  782063 out.go:368] Setting JSON to false
	I1017 19:43:51.799708  782063 mustload.go:65] Loading cluster: default-k8s-diff-port-112878
	I1017 19:43:51.800207  782063 config.go:182] Loaded profile config "default-k8s-diff-port-112878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:51.800787  782063 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-112878 --format={{.State.Status}}
	I1017 19:43:51.822365  782063 host.go:66] Checking if "default-k8s-diff-port-112878" exists ...
	I1017 19:43:51.822811  782063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:43:51.887643  782063 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:92 SystemTime:2025-10-17 19:43:51.875311378 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:43:51.888365  782063 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-112878 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 19:43:51.890651  782063 out.go:179] * Pausing node default-k8s-diff-port-112878 ... 
	I1017 19:43:51.892363  782063 host.go:66] Checking if "default-k8s-diff-port-112878" exists ...
	I1017 19:43:51.892670  782063 ssh_runner.go:195] Run: systemctl --version
	I1017 19:43:51.892735  782063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-112878
	I1017 19:43:51.921876  782063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/default-k8s-diff-port-112878/id_rsa Username:docker}
	I1017 19:43:52.031478  782063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:43:52.044880  782063 pause.go:52] kubelet running: true
	I1017 19:43:52.044940  782063 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:43:52.221991  782063 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:43:52.222081  782063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:43:52.299776  782063 cri.go:89] found id: "2391fc7daf6f0c7ace1a6cb5b28f6b03f222c2106a49c1868f7146db0a965dd7"
	I1017 19:43:52.299800  782063 cri.go:89] found id: "92bd478433933f55dafbdd5f5569e0622bf677028ee7463d99447247a33cae6d"
	I1017 19:43:52.299804  782063 cri.go:89] found id: "912bc52145327590baf7aab5df00ca36437b09ad6e97370f085bd5d74f82ddcd"
	I1017 19:43:52.299808  782063 cri.go:89] found id: "592cdb02b1e2a86a61705ebc3560af11df6ef568e4e5c623da98e983c8a1cc61"
	I1017 19:43:52.299810  782063 cri.go:89] found id: "7cdc338f648b8ccfe127c89d4264cccf946e8626662bd3d5e65c3c7cbd06c887"
	I1017 19:43:52.299813  782063 cri.go:89] found id: "506f5ac682e5be1da5a5ba36fa52da915314fc50810783bf7bd35773b3730f41"
	I1017 19:43:52.299815  782063 cri.go:89] found id: "4933ad97f18077b0cfa66f7e6cbc74867dea0c8b55ad78ca6dccc0cac1d91a49"
	I1017 19:43:52.299818  782063 cri.go:89] found id: "901ddd13929fb6920b4881f0a5981c62221ac0353a5ea0b0595b97491426fe6d"
	I1017 19:43:52.299820  782063 cri.go:89] found id: "8df1c4c5ef0c73184c6ef8c075f8289830d708811be044c84f5a1ae516269398"
	I1017 19:43:52.299825  782063 cri.go:89] found id: "c497bea94d8c6edad854d6e668938a18cf8418cc85439cf5e1a69153d1e8609b"
	I1017 19:43:52.299828  782063 cri.go:89] found id: "77d4f52b9e17b76572890624c2496a017110f0f9e062d8447aa724c65828e7ac"
	I1017 19:43:52.299831  782063 cri.go:89] found id: ""
	I1017 19:43:52.299867  782063 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:43:52.313872  782063 retry.go:31] will retry after 266.25202ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:43:52Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:43:52.580334  782063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:43:52.596584  782063 pause.go:52] kubelet running: false
	I1017 19:43:52.596648  782063 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:43:52.761396  782063 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:43:52.761502  782063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:43:52.852586  782063 cri.go:89] found id: "2391fc7daf6f0c7ace1a6cb5b28f6b03f222c2106a49c1868f7146db0a965dd7"
	I1017 19:43:52.852608  782063 cri.go:89] found id: "92bd478433933f55dafbdd5f5569e0622bf677028ee7463d99447247a33cae6d"
	I1017 19:43:52.852614  782063 cri.go:89] found id: "912bc52145327590baf7aab5df00ca36437b09ad6e97370f085bd5d74f82ddcd"
	I1017 19:43:52.852619  782063 cri.go:89] found id: "592cdb02b1e2a86a61705ebc3560af11df6ef568e4e5c623da98e983c8a1cc61"
	I1017 19:43:52.852623  782063 cri.go:89] found id: "7cdc338f648b8ccfe127c89d4264cccf946e8626662bd3d5e65c3c7cbd06c887"
	I1017 19:43:52.852629  782063 cri.go:89] found id: "506f5ac682e5be1da5a5ba36fa52da915314fc50810783bf7bd35773b3730f41"
	I1017 19:43:52.852633  782063 cri.go:89] found id: "4933ad97f18077b0cfa66f7e6cbc74867dea0c8b55ad78ca6dccc0cac1d91a49"
	I1017 19:43:52.852637  782063 cri.go:89] found id: "901ddd13929fb6920b4881f0a5981c62221ac0353a5ea0b0595b97491426fe6d"
	I1017 19:43:52.852641  782063 cri.go:89] found id: "8df1c4c5ef0c73184c6ef8c075f8289830d708811be044c84f5a1ae516269398"
	I1017 19:43:52.852649  782063 cri.go:89] found id: "c497bea94d8c6edad854d6e668938a18cf8418cc85439cf5e1a69153d1e8609b"
	I1017 19:43:52.852654  782063 cri.go:89] found id: "77d4f52b9e17b76572890624c2496a017110f0f9e062d8447aa724c65828e7ac"
	I1017 19:43:52.852657  782063 cri.go:89] found id: ""
	I1017 19:43:52.852739  782063 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:43:52.869937  782063 retry.go:31] will retry after 424.246697ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:43:52Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:43:53.294573  782063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:43:53.310526  782063 pause.go:52] kubelet running: false
	I1017 19:43:53.310591  782063 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:43:53.491997  782063 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:43:53.492082  782063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:43:53.574329  782063 cri.go:89] found id: "2391fc7daf6f0c7ace1a6cb5b28f6b03f222c2106a49c1868f7146db0a965dd7"
	I1017 19:43:53.574368  782063 cri.go:89] found id: "92bd478433933f55dafbdd5f5569e0622bf677028ee7463d99447247a33cae6d"
	I1017 19:43:53.574373  782063 cri.go:89] found id: "912bc52145327590baf7aab5df00ca36437b09ad6e97370f085bd5d74f82ddcd"
	I1017 19:43:53.574376  782063 cri.go:89] found id: "592cdb02b1e2a86a61705ebc3560af11df6ef568e4e5c623da98e983c8a1cc61"
	I1017 19:43:53.574379  782063 cri.go:89] found id: "7cdc338f648b8ccfe127c89d4264cccf946e8626662bd3d5e65c3c7cbd06c887"
	I1017 19:43:53.574382  782063 cri.go:89] found id: "506f5ac682e5be1da5a5ba36fa52da915314fc50810783bf7bd35773b3730f41"
	I1017 19:43:53.574385  782063 cri.go:89] found id: "4933ad97f18077b0cfa66f7e6cbc74867dea0c8b55ad78ca6dccc0cac1d91a49"
	I1017 19:43:53.574388  782063 cri.go:89] found id: "901ddd13929fb6920b4881f0a5981c62221ac0353a5ea0b0595b97491426fe6d"
	I1017 19:43:53.574390  782063 cri.go:89] found id: "8df1c4c5ef0c73184c6ef8c075f8289830d708811be044c84f5a1ae516269398"
	I1017 19:43:53.574396  782063 cri.go:89] found id: "c497bea94d8c6edad854d6e668938a18cf8418cc85439cf5e1a69153d1e8609b"
	I1017 19:43:53.574399  782063 cri.go:89] found id: "77d4f52b9e17b76572890624c2496a017110f0f9e062d8447aa724c65828e7ac"
	I1017 19:43:53.574401  782063 cri.go:89] found id: ""
	I1017 19:43:53.574443  782063 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:43:53.589134  782063 retry.go:31] will retry after 320.570787ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:43:53Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:43:53.910799  782063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:43:53.926110  782063 pause.go:52] kubelet running: false
	I1017 19:43:53.926208  782063 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 19:43:54.117945  782063 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 19:43:54.118014  782063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 19:43:54.205305  782063 cri.go:89] found id: "2391fc7daf6f0c7ace1a6cb5b28f6b03f222c2106a49c1868f7146db0a965dd7"
	I1017 19:43:54.205352  782063 cri.go:89] found id: "92bd478433933f55dafbdd5f5569e0622bf677028ee7463d99447247a33cae6d"
	I1017 19:43:54.205368  782063 cri.go:89] found id: "912bc52145327590baf7aab5df00ca36437b09ad6e97370f085bd5d74f82ddcd"
	I1017 19:43:54.205373  782063 cri.go:89] found id: "592cdb02b1e2a86a61705ebc3560af11df6ef568e4e5c623da98e983c8a1cc61"
	I1017 19:43:54.205379  782063 cri.go:89] found id: "7cdc338f648b8ccfe127c89d4264cccf946e8626662bd3d5e65c3c7cbd06c887"
	I1017 19:43:54.205387  782063 cri.go:89] found id: "506f5ac682e5be1da5a5ba36fa52da915314fc50810783bf7bd35773b3730f41"
	I1017 19:43:54.205391  782063 cri.go:89] found id: "4933ad97f18077b0cfa66f7e6cbc74867dea0c8b55ad78ca6dccc0cac1d91a49"
	I1017 19:43:54.205396  782063 cri.go:89] found id: "901ddd13929fb6920b4881f0a5981c62221ac0353a5ea0b0595b97491426fe6d"
	I1017 19:43:54.205399  782063 cri.go:89] found id: "8df1c4c5ef0c73184c6ef8c075f8289830d708811be044c84f5a1ae516269398"
	I1017 19:43:54.205407  782063 cri.go:89] found id: "c497bea94d8c6edad854d6e668938a18cf8418cc85439cf5e1a69153d1e8609b"
	I1017 19:43:54.205412  782063 cri.go:89] found id: "77d4f52b9e17b76572890624c2496a017110f0f9e062d8447aa724c65828e7ac"
	I1017 19:43:54.205416  782063 cri.go:89] found id: ""
	I1017 19:43:54.205463  782063 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:43:54.229494  782063 out.go:203] 
	W1017 19:43:54.230975  782063 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:43:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:43:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:43:54.230994  782063 out.go:285] * 
	* 
	W1017 19:43:54.237828  782063 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:43:54.239101  782063 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-112878 --alsologtostderr -v=1 failed: exit status 80
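Note: every pause attempt in the stderr above follows the same pattern: crictl lists the kube-system containers successfully, but the follow-up `sudo runc list -f json` fails with `open /run/runc: no such file or directory`, so the pause step is retried three times and then aborted with GUEST_PAUSE. A minimal sketch for confirming the missing state directory on the node, assuming the profile is still running (illustrative commands, not part of the test run):

	# the state root that runc expects is absent on the node
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-112878 -- sudo ls /run/runc
	# the same containers remain visible through the CRI
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-112878 -- sudo crictl ps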
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-112878
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-112878:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b",
	        "Created": "2025-10-17T19:41:51.17407631Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 766846,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:42:55.841153678Z",
	            "FinishedAt": "2025-10-17T19:42:54.856556999Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b/hostname",
	        "HostsPath": "/var/lib/docker/containers/8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b/hosts",
	        "LogPath": "/var/lib/docker/containers/8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b/8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b-json.log",
	        "Name": "/default-k8s-diff-port-112878",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-112878:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-112878",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b",
	                "LowerDir": "/var/lib/docker/overlay2/cff612883004ab32fa13a69dfcba6214c9fdd98d230080eef796d074286be5a8-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cff612883004ab32fa13a69dfcba6214c9fdd98d230080eef796d074286be5a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cff612883004ab32fa13a69dfcba6214c9fdd98d230080eef796d074286be5a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cff612883004ab32fa13a69dfcba6214c9fdd98d230080eef796d074286be5a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-112878",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-112878/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-112878",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-112878",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-112878",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dd4d1ba2380b240ad77d09946519a5271d147d4d9bc9568bac2ee215551c6b1b",
	            "SandboxKey": "/var/run/docker/netns/dd4d1ba2380b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-112878": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:4a:3b:bb:ab:6c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "120e96e993d1ce75e5b49ee5a2ece0f97836f04b7e2bb3daf297bdcc6e4a8079",
	                    "EndpointID": "6473d99e954665781b112159f23080e2cd26e735876785c078b87fbe54745c37",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-112878",
	                        "8097e3bd54ba"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
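The post-mortem reads two things out of this inspect dump: the container state (Status "running", Paused false, i.e. the kic container itself was never Docker-paused; minikube pause freezes the Kubernetes workloads, not the node container) and the published host ports. A sketch for pulling just those fields with Go templates:

	# Container state: expect status=running paused=false for this failure mode.
	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-112878
	# Host port mappings (22, 2376, 5000, 8444, 32443 above).
	docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-diff-port-112878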
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-112878 -n default-k8s-diff-port-112878
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-112878 -n default-k8s-diff-port-112878: exit status 2 (391.253366ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
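A non-zero status is expected here mid-pause: per "minikube status --help", the exit status encodes host, cluster, and Kubernetes health as bit flags, so exit status 2 with a Running host points at a stopped or paused control plane rather than a harness error. A sketch of the probe the harness runs, plus reading the component bits:

	# Same probe as above; the second call reads back the status bit flags.
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-112878 -n default-k8s-diff-port-112878
	out/minikube-linux-amd64 status -p default-k8s-diff-port-112878; echo "status bits: $?"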
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-112878 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-112878 logs -n 25: (1.283012661s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-448344 sudo cat /etc/kubernetes/kubelet.conf                                                                                   │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo cat /var/lib/kubelet/config.yaml                                                                                   │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo systemctl status docker --all --full --no-pager                                                                    │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p enable-default-cni-448344 pgrep -a kubelet                                                                                          │ enable-default-cni-448344    │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo systemctl cat docker --no-pager                                                                                    │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo cat /etc/docker/daemon.json                                                                                        │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-448344 sudo docker system info                                                                                                 │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-448344 sudo systemctl status cri-docker --all --full --no-pager                                                                │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-448344 sudo systemctl cat cri-docker --no-pager                                                                                │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-448344 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo cri-dockerd --version                                                                                              │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo systemctl status containerd --all --full --no-pager                                                                │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-448344 sudo systemctl cat containerd --no-pager                                                                                │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo cat /lib/systemd/system/containerd.service                                                                         │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo cat /etc/containerd/config.toml                                                                                    │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo containerd config dump                                                                                             │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo systemctl status crio --all --full --no-pager                                                                      │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo systemctl cat crio --no-pager                                                                                      │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo crio config                                                                                                        │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ delete  │ -p auto-448344                                                                                                                         │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ image   │ default-k8s-diff-port-112878 image list --format=json                                                                                  │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ pause   │ -p default-k8s-diff-port-112878 --alsologtostderr -v=1                                                                                 │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	│ start   │ -p calico-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio │ calico-448344                │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:43:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:43:54.315070  782802 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:43:54.315397  782802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:43:54.315409  782802 out.go:374] Setting ErrFile to fd 2...
	I1017 19:43:54.315413  782802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:43:54.315647  782802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:43:54.316278  782802 out.go:368] Setting JSON to false
	I1017 19:43:54.317726  782802 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12373,"bootTime":1760717861,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:43:54.317869  782802 start.go:141] virtualization: kvm guest
	I1017 19:43:54.320105  782802 out.go:179] * [calico-448344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:43:54.322196  782802 notify.go:220] Checking for updates...
	I1017 19:43:54.322239  782802 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:43:54.324373  782802 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:43:54.325971  782802 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:43:54.327612  782802 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:43:54.331516  782802 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:43:54.333007  782802 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:43:54.336866  782802 config.go:182] Loaded profile config "default-k8s-diff-port-112878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:54.336982  782802 config.go:182] Loaded profile config "enable-default-cni-448344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:54.337064  782802 config.go:182] Loaded profile config "flannel-448344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:54.337191  782802 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:43:54.369143  782802 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:43:54.369315  782802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:43:54.446465  782802 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 19:43:54.433861155 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:43:54.446575  782802 docker.go:318] overlay module found
	I1017 19:43:54.448499  782802 out.go:179] * Using the docker driver based on user configuration
	I1017 19:43:54.451883  782802 start.go:305] selected driver: docker
	I1017 19:43:54.451903  782802 start.go:925] validating driver "docker" against <nil>
	I1017 19:43:54.451917  782802 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:43:54.452528  782802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:43:54.522964  782802 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 19:43:54.508408454 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:43:54.523282  782802 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 19:43:54.523760  782802 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:43:54.526652  782802 out.go:179] * Using Docker driver with root privileges
	I1017 19:43:54.529549  782802 cni.go:84] Creating CNI manager for "calico"
	I1017 19:43:54.529581  782802 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1017 19:43:54.529710  782802 start.go:349] cluster config:
	{Name:calico-448344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-448344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:43:54.531745  782802 out.go:179] * Starting "calico-448344" primary control-plane node in "calico-448344" cluster
	I1017 19:43:54.533091  782802 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:43:54.535483  782802 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:43:54.537225  782802 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:43:54.537294  782802 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:43:54.537310  782802 cache.go:58] Caching tarball of preloaded images
	I1017 19:43:54.537348  782802 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:43:54.537434  782802 preload.go:233] Found /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:43:54.537445  782802 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:43:54.537608  782802 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/calico-448344/config.json ...
	I1017 19:43:54.537637  782802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/calico-448344/config.json: {Name:mk4f4e34c6020c86bfceab4f80996ef433eb9e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:54.568474  782802 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:43:54.568585  782802 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:43:54.568662  782802 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:43:54.568757  782802 start.go:360] acquireMachinesLock for calico-448344: {Name:mk7e872c445e9eb17347ddd6685363069805e1af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:43:54.568991  782802 start.go:364] duration metric: took 149.817µs to acquireMachinesLock for "calico-448344"
	I1017 19:43:54.569027  782802 start.go:93] Provisioning new machine with config: &{Name:calico-448344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-448344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:43:54.569274  782802 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Oct 17 19:43:17 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:17.215164641Z" level=info msg="Created container 77d4f52b9e17b76572890624c2496a017110f0f9e062d8447aa724c65828e7ac: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hlrh4/kubernetes-dashboard" id=5103060b-7786-4fad-9675-916c2afd2966 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:17 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:17.216885091Z" level=info msg="Starting container: 77d4f52b9e17b76572890624c2496a017110f0f9e062d8447aa724c65828e7ac" id=89cd6103-9d07-47fd-a467-b18a56d70dc0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:43:17 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:17.219448288Z" level=info msg="Started container" PID=1716 containerID=77d4f52b9e17b76572890624c2496a017110f0f9e062d8447aa724c65828e7ac description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hlrh4/kubernetes-dashboard id=89cd6103-9d07-47fd-a467-b18a56d70dc0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ee509c5dad41b7f7a086ed0a227d403d74450e0dc6cb4a4c210a9ed6f71823d
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.901876155Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d60eba8f-10bf-45d0-bc01-0b2b62436b97 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.902843078Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ccce9042-70e2-44f6-9a5a-339ab8e45e5c name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.903869561Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw/dashboard-metrics-scraper" id=3181c26f-0240-4697-bfcd-0d2e6951e903 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.904163277Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.909821379Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.910617322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.938571865Z" level=info msg="Created container c497bea94d8c6edad854d6e668938a18cf8418cc85439cf5e1a69153d1e8609b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw/dashboard-metrics-scraper" id=3181c26f-0240-4697-bfcd-0d2e6951e903 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.939380838Z" level=info msg="Starting container: c497bea94d8c6edad854d6e668938a18cf8418cc85439cf5e1a69153d1e8609b" id=dbdfa257-c3c6-4e17-8a6b-243bba0aa471 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.94144018Z" level=info msg="Started container" PID=1738 containerID=c497bea94d8c6edad854d6e668938a18cf8418cc85439cf5e1a69153d1e8609b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw/dashboard-metrics-scraper id=dbdfa257-c3c6-4e17-8a6b-243bba0aa471 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f22ed893aad69746920d714c825dc88f380e1a7f2efae468600b9a6576645098
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.068385519Z" level=info msg="Removing container: f4466c8b5196f1e504186902977f650b45480aea237793e6079bbe998f5419b6" id=fb556d6a-af70-4010-90fe-446e2aad3d90 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.069653548Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2302eb84-cf2c-4a81-8b30-c2d401f60b07 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.072353133Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8b1095e6-767d-44e2-9bfa-6ddeedd5fb90 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.073954423Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=785efc66-c710-4215-a5f0-ec8be079b0ea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.074373295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.081676122Z" level=info msg="Removed container f4466c8b5196f1e504186902977f650b45480aea237793e6079bbe998f5419b6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw/dashboard-metrics-scraper" id=fb556d6a-af70-4010-90fe-446e2aad3d90 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.081867665Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.082083535Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/046715f1378cc93b62ea0a86f33935d4edefcc497c43d70e7aff9edfb68ae906/merged/etc/passwd: no such file or directory"
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.082123852Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/046715f1378cc93b62ea0a86f33935d4edefcc497c43d70e7aff9edfb68ae906/merged/etc/group: no such file or directory"
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.082457793Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.10547901Z" level=info msg="Created container 2391fc7daf6f0c7ace1a6cb5b28f6b03f222c2106a49c1868f7146db0a965dd7: kube-system/storage-provisioner/storage-provisioner" id=785efc66-c710-4215-a5f0-ec8be079b0ea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.106202128Z" level=info msg="Starting container: 2391fc7daf6f0c7ace1a6cb5b28f6b03f222c2106a49c1868f7146db0a965dd7" id=3b227baa-45a1-4db4-9251-1806a0fd5aa7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.108206057Z" level=info msg="Started container" PID=1748 containerID=2391fc7daf6f0c7ace1a6cb5b28f6b03f222c2106a49c1868f7146db0a965dd7 description=kube-system/storage-provisioner/storage-provisioner id=3b227baa-45a1-4db4-9251-1806a0fd5aa7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e6920fdc28745f4db79b28e4cf319c4937a45372c39e34aa2494443e1c983b5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	2391fc7daf6f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   3e6920fdc2874       storage-provisioner                                    kube-system
	c497bea94d8c6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   f22ed893aad69       dashboard-metrics-scraper-6ffb444bf9-k8cfw             kubernetes-dashboard
	77d4f52b9e17b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   4ee509c5dad41       kubernetes-dashboard-855c9754f9-hlrh4                  kubernetes-dashboard
	92bd478433933       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   8690a151f1dbf       coredns-66bc5c9577-vckxk                               kube-system
	bc6bb7556a0a9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   5ae48466036c4       busybox                                                default
	912bc52145327       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           49 seconds ago      Running             kube-proxy                  0                   d8ce64be66b16       kube-proxy-d2jpw                                       kube-system
	592cdb02b1e2a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   3e6920fdc2874       storage-provisioner                                    kube-system
	7cdc338f648b8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   07a6f7d8a2bf3       kindnet-xvc9b                                          kube-system
	506f5ac682e5b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           52 seconds ago      Running             kube-controller-manager     0                   d1d9127bfd0eb       kube-controller-manager-default-k8s-diff-port-112878   kube-system
	4933ad97f1807       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           52 seconds ago      Running             kube-apiserver              0                   b84526a530ced       kube-apiserver-default-k8s-diff-port-112878            kube-system
	901ddd13929fb       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           52 seconds ago      Running             kube-scheduler              0                   7575571859ea1       kube-scheduler-default-k8s-diff-port-112878            kube-system
	8df1c4c5ef0c7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           52 seconds ago      Running             etcd                        0                   4317f8225eb57       etcd-default-k8s-diff-port-112878                      kube-system
	
	
	==> coredns [92bd478433933f55dafbdd5f5569e0622bf677028ee7463d99447247a33cae6d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55128 - 9923 "HINFO IN 763171980426719819.3724985484267218911. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.087807694s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-112878
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-112878
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=default-k8s-diff-port-112878
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_42_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:42:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-112878
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:43:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:43:36 +0000   Fri, 17 Oct 2025 19:42:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:43:36 +0000   Fri, 17 Oct 2025 19:42:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:43:36 +0000   Fri, 17 Oct 2025 19:42:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:43:36 +0000   Fri, 17 Oct 2025 19:42:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-112878
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                d9945229-8d2c-480b-8ce0-8f084b03705d
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-vckxk                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-default-k8s-diff-port-112878                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-xvc9b                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-default-k8s-diff-port-112878             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-112878    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-d2jpw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-default-k8s-diff-port-112878             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-k8cfw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hlrh4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  NodeHasSufficientMemory  109s               kubelet          Node default-k8s-diff-port-112878 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s               kubelet          Node default-k8s-diff-port-112878 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s               kubelet          Node default-k8s-diff-port-112878 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s               node-controller  Node default-k8s-diff-port-112878 event: Registered Node default-k8s-diff-port-112878 in Controller
	  Normal  NodeReady                92s                kubelet          Node default-k8s-diff-port-112878 status is now: NodeReady
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 53s)  kubelet          Node default-k8s-diff-port-112878 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 53s)  kubelet          Node default-k8s-diff-port-112878 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 53s)  kubelet          Node default-k8s-diff-port-112878 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node default-k8s-diff-port-112878 event: Registered Node default-k8s-diff-port-112878 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee e4 05 02 02 de 08 06
	[  +0.011274] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 4e 5e a6 cc 79 08 06
	[ +42.965565] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 34 84 5a d2 5b 08 06
	[  +0.002282] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee e4 05 02 02 de 08 06
	
	
	==> etcd [8df1c4c5ef0c73184c6ef8c075f8289830d708811be044c84f5a1ae516269398] <==
	{"level":"warn","ts":"2025-10-17T19:43:04.758668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.767449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.778818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.787991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.800750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.809873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.817633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.826001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.834581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.844285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.854210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.861074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.871903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.883134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.892245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.908459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.917061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.925408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.933981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.944303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.952403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.964985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.972309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.980660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:05.060739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51176","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:43:55 up  3:26,  0 user,  load average: 4.12, 3.49, 2.31
	Linux default-k8s-diff-port-112878 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7cdc338f648b8ccfe127c89d4264cccf946e8626662bd3d5e65c3c7cbd06c887] <==
	I1017 19:43:06.507259       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:43:06.507519       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 19:43:06.507764       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:43:06.507843       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:43:06.507890       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:43:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:43:06.805162       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:43:06.805209       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:43:06.805220       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:43:06.805345       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:43:07.006977       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:43:07.007810       1 metrics.go:72] Registering metrics
	I1017 19:43:07.007947       1 controller.go:711] "Syncing nftables rules"
	I1017 19:43:16.804866       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:43:16.804961       1 main.go:301] handling current node
	I1017 19:43:26.808766       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:43:26.808799       1 main.go:301] handling current node
	I1017 19:43:36.805010       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:43:36.805044       1 main.go:301] handling current node
	I1017 19:43:46.805816       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:43:46.805872       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4933ad97f18077b0cfa66f7e6cbc74867dea0c8b55ad78ca6dccc0cac1d91a49] <==
	I1017 19:43:05.628001       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 19:43:05.628821       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:43:05.629031       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:43:05.627578       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1017 19:43:05.629277       1 aggregator.go:171] initial CRD sync complete...
	I1017 19:43:05.629333       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 19:43:05.629340       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 19:43:05.629346       1 cache.go:39] Caches are synced for autoregister controller
	E1017 19:43:05.636753       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 19:43:05.637151       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 19:43:05.653324       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 19:43:05.658563       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 19:43:05.658609       1 policy_source.go:240] refreshing policies
	I1017 19:43:05.669970       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:43:05.986016       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:43:05.994550       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 19:43:06.067769       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 19:43:06.126841       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:43:06.153332       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:43:06.252215       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.168.81"}
	I1017 19:43:06.279258       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.26.243"}
	I1017 19:43:06.531077       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:43:09.373930       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:43:09.422209       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:43:09.524404       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [506f5ac682e5be1da5a5ba36fa52da915314fc50810783bf7bd35773b3730f41] <==
	I1017 19:43:08.968529       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:43:08.968560       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:43:08.968790       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:43:08.968926       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 19:43:08.969383       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 19:43:08.970643       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 19:43:08.972722       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 19:43:08.974945       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 19:43:08.974973       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 19:43:08.975011       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 19:43:08.975040       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 19:43:08.975049       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 19:43:08.975055       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 19:43:08.975039       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:43:08.976333       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 19:43:08.976354       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 19:43:08.978574       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 19:43:08.980803       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 19:43:08.983076       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 19:43:08.984746       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 19:43:08.987098       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 19:43:08.987111       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:43:08.992410       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:43:08.992439       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:43:08.992452       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [912bc52145327590baf7aab5df00ca36437b09ad6e97370f085bd5d74f82ddcd] <==
	I1017 19:43:06.410238       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:43:06.479016       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:43:06.579159       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:43:06.579193       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 19:43:06.579298       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:43:06.605559       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:43:06.605721       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:43:06.612154       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:43:06.612858       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:43:06.612909       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:43:06.615987       1 config.go:309] "Starting node config controller"
	I1017 19:43:06.616033       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:43:06.616042       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:43:06.616494       1 config.go:200] "Starting service config controller"
	I1017 19:43:06.616545       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:43:06.616505       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:43:06.616663       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:43:06.616832       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:43:06.617153       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:43:06.617901       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:43:06.716850       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:43:06.716866       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [901ddd13929fb6920b4881f0a5981c62221ac0353a5ea0b0595b97491426fe6d] <==
	I1017 19:43:04.308199       1 serving.go:386] Generated self-signed cert in-memory
	W1017 19:43:05.562902       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 19:43:05.562997       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 19:43:05.563011       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 19:43:05.563021       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 19:43:05.596109       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 19:43:05.596143       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:43:05.599466       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 19:43:05.599651       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:43:05.600973       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:43:05.599675       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 19:43:05.701577       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:43:09 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:09.593047     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/faf76b76-7636-40eb-98aa-e9ef5eb101bc-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hlrh4\" (UID: \"faf76b76-7636-40eb-98aa-e9ef5eb101bc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hlrh4"
	Oct 17 19:43:09 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:09.593128     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fcdd7205-ad74-4eb3-addd-cfcf1e35074e-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-k8cfw\" (UID: \"fcdd7205-ad74-4eb3-addd-cfcf1e35074e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw"
	Oct 17 19:43:09 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:09.593181     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnt5m\" (UniqueName: \"kubernetes.io/projected/fcdd7205-ad74-4eb3-addd-cfcf1e35074e-kube-api-access-cnt5m\") pod \"dashboard-metrics-scraper-6ffb444bf9-k8cfw\" (UID: \"fcdd7205-ad74-4eb3-addd-cfcf1e35074e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw"
	Oct 17 19:43:09 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:09.593225     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4qdl\" (UniqueName: \"kubernetes.io/projected/faf76b76-7636-40eb-98aa-e9ef5eb101bc-kube-api-access-r4qdl\") pod \"kubernetes-dashboard-855c9754f9-hlrh4\" (UID: \"faf76b76-7636-40eb-98aa-e9ef5eb101bc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hlrh4"
	Oct 17 19:43:12 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:12.998523     707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw" podStartSLOduration=0.89919793 podStartE2EDuration="3.998498021s" podCreationTimestamp="2025-10-17 19:43:09 +0000 UTC" firstStartedPulling="2025-10-17 19:43:09.827955133 +0000 UTC m=+7.047445200" lastFinishedPulling="2025-10-17 19:43:12.92725522 +0000 UTC m=+10.146745291" observedRunningTime="2025-10-17 19:43:12.998139238 +0000 UTC m=+10.217629334" watchObservedRunningTime="2025-10-17 19:43:12.998498021 +0000 UTC m=+10.217988097"
	Oct 17 19:43:13 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:13.992356     707 scope.go:117] "RemoveContainer" containerID="6fa3bb54b9ac5d51983aa653d72d4e0026f4e5c7f8229a1585b5d6570f9572a7"
	Oct 17 19:43:15 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:15.000644     707 scope.go:117] "RemoveContainer" containerID="f4466c8b5196f1e504186902977f650b45480aea237793e6079bbe998f5419b6"
	Oct 17 19:43:15 default-k8s-diff-port-112878 kubelet[707]: E1017 19:43:15.000819     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k8cfw_kubernetes-dashboard(fcdd7205-ad74-4eb3-addd-cfcf1e35074e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw" podUID="fcdd7205-ad74-4eb3-addd-cfcf1e35074e"
	Oct 17 19:43:15 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:15.002784     707 scope.go:117] "RemoveContainer" containerID="6fa3bb54b9ac5d51983aa653d72d4e0026f4e5c7f8229a1585b5d6570f9572a7"
	Oct 17 19:43:16 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:16.007203     707 scope.go:117] "RemoveContainer" containerID="f4466c8b5196f1e504186902977f650b45480aea237793e6079bbe998f5419b6"
	Oct 17 19:43:16 default-k8s-diff-port-112878 kubelet[707]: E1017 19:43:16.007452     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k8cfw_kubernetes-dashboard(fcdd7205-ad74-4eb3-addd-cfcf1e35074e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw" podUID="fcdd7205-ad74-4eb3-addd-cfcf1e35074e"
	Oct 17 19:43:18 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:18.213560     707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hlrh4" podStartSLOduration=1.877752512 podStartE2EDuration="9.213533988s" podCreationTimestamp="2025-10-17 19:43:09 +0000 UTC" firstStartedPulling="2025-10-17 19:43:09.828000431 +0000 UTC m=+7.047490486" lastFinishedPulling="2025-10-17 19:43:17.163781889 +0000 UTC m=+14.383271962" observedRunningTime="2025-10-17 19:43:18.025591235 +0000 UTC m=+15.245081307" watchObservedRunningTime="2025-10-17 19:43:18.213533988 +0000 UTC m=+15.433024063"
	Oct 17 19:43:23 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:23.696389     707 scope.go:117] "RemoveContainer" containerID="f4466c8b5196f1e504186902977f650b45480aea237793e6079bbe998f5419b6"
	Oct 17 19:43:23 default-k8s-diff-port-112878 kubelet[707]: E1017 19:43:23.696638     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k8cfw_kubernetes-dashboard(fcdd7205-ad74-4eb3-addd-cfcf1e35074e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw" podUID="fcdd7205-ad74-4eb3-addd-cfcf1e35074e"
	Oct 17 19:43:36 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:36.901258     707 scope.go:117] "RemoveContainer" containerID="f4466c8b5196f1e504186902977f650b45480aea237793e6079bbe998f5419b6"
	Oct 17 19:43:37 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:37.066968     707 scope.go:117] "RemoveContainer" containerID="f4466c8b5196f1e504186902977f650b45480aea237793e6079bbe998f5419b6"
	Oct 17 19:43:37 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:37.067248     707 scope.go:117] "RemoveContainer" containerID="c497bea94d8c6edad854d6e668938a18cf8418cc85439cf5e1a69153d1e8609b"
	Oct 17 19:43:37 default-k8s-diff-port-112878 kubelet[707]: E1017 19:43:37.067486     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k8cfw_kubernetes-dashboard(fcdd7205-ad74-4eb3-addd-cfcf1e35074e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw" podUID="fcdd7205-ad74-4eb3-addd-cfcf1e35074e"
	Oct 17 19:43:37 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:37.069252     707 scope.go:117] "RemoveContainer" containerID="592cdb02b1e2a86a61705ebc3560af11df6ef568e4e5c623da98e983c8a1cc61"
	Oct 17 19:43:43 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:43.696816     707 scope.go:117] "RemoveContainer" containerID="c497bea94d8c6edad854d6e668938a18cf8418cc85439cf5e1a69153d1e8609b"
	Oct 17 19:43:43 default-k8s-diff-port-112878 kubelet[707]: E1017 19:43:43.697052     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k8cfw_kubernetes-dashboard(fcdd7205-ad74-4eb3-addd-cfcf1e35074e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw" podUID="fcdd7205-ad74-4eb3-addd-cfcf1e35074e"
	Oct 17 19:43:52 default-k8s-diff-port-112878 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 19:43:52 default-k8s-diff-port-112878 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 19:43:52 default-k8s-diff-port-112878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 19:43:52 default-k8s-diff-port-112878 systemd[1]: kubelet.service: Consumed 1.714s CPU time.
	
	
	==> kubernetes-dashboard [77d4f52b9e17b76572890624c2496a017110f0f9e062d8447aa724c65828e7ac] <==
	2025/10/17 19:43:17 Using namespace: kubernetes-dashboard
	2025/10/17 19:43:17 Using in-cluster config to connect to apiserver
	2025/10/17 19:43:17 Using secret token for csrf signing
	2025/10/17 19:43:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 19:43:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 19:43:17 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 19:43:17 Generating JWE encryption key
	2025/10/17 19:43:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 19:43:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 19:43:17 Initializing JWE encryption key from synchronized object
	2025/10/17 19:43:17 Creating in-cluster Sidecar client
	2025/10/17 19:43:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 19:43:17 Serving insecurely on HTTP port: 9090
	2025/10/17 19:43:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 19:43:17 Starting overwatch
	
	
	==> storage-provisioner [2391fc7daf6f0c7ace1a6cb5b28f6b03f222c2106a49c1868f7146db0a965dd7] <==
	I1017 19:43:37.120018       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 19:43:37.128160       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 19:43:37.128203       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 19:43:37.130726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:43:40.586986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:43:44.849061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:43:48.448276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:43:51.503057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:43:54.526817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:43:54.531957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:43:54.532145       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 19:43:54.532347       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f6462b38-005f-4c92-8d22-eea640034e0b", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-112878_e5f4f184-b609-4561-a915-75d847001cc2 became leader
	I1017 19:43:54.532857       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-112878_e5f4f184-b609-4561-a915-75d847001cc2!
	W1017 19:43:54.537951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:43:54.543913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:43:54.634023       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-112878_e5f4f184-b609-4561-a915-75d847001cc2!
	
	
	==> storage-provisioner [592cdb02b1e2a86a61705ebc3560af11df6ef568e4e5c623da98e983c8a1cc61] <==
	I1017 19:43:06.359811       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 19:43:36.362159       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-112878 -n default-k8s-diff-port-112878
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-112878 -n default-k8s-diff-port-112878: exit status 2 (357.403822ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-112878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-112878
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-112878:

-- stdout --
	[
	    {
	        "Id": "8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b",
	        "Created": "2025-10-17T19:41:51.17407631Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 766846,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:42:55.841153678Z",
	            "FinishedAt": "2025-10-17T19:42:54.856556999Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b/hostname",
	        "HostsPath": "/var/lib/docker/containers/8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b/hosts",
	        "LogPath": "/var/lib/docker/containers/8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b/8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b-json.log",
	        "Name": "/default-k8s-diff-port-112878",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-112878:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-112878",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8097e3bd54ba555448dec314d1787c2226a13079be774ea5f55f5529ed22938b",
	                "LowerDir": "/var/lib/docker/overlay2/cff612883004ab32fa13a69dfcba6214c9fdd98d230080eef796d074286be5a8-init/diff:/var/lib/docker/overlay2/dbfb6a42e05d15debefb7c829b0dbabbe558b70da40f1ab4f30d27e0dda96088/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cff612883004ab32fa13a69dfcba6214c9fdd98d230080eef796d074286be5a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cff612883004ab32fa13a69dfcba6214c9fdd98d230080eef796d074286be5a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cff612883004ab32fa13a69dfcba6214c9fdd98d230080eef796d074286be5a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-112878",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-112878/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-112878",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-112878",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-112878",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dd4d1ba2380b240ad77d09946519a5271d147d4d9bc9568bac2ee215551c6b1b",
	            "SandboxKey": "/var/run/docker/netns/dd4d1ba2380b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-112878": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:4a:3b:bb:ab:6c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "120e96e993d1ce75e5b49ee5a2ece0f97836f04b7e2bb3daf297bdcc6e4a8079",
	                    "EndpointID": "6473d99e954665781b112159f23080e2cd26e735876785c078b87fbe54745c37",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-112878",
	                        "8097e3bd54ba"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-112878 -n default-k8s-diff-port-112878
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-112878 -n default-k8s-diff-port-112878: exit status 2 (374.317799ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-112878 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-112878 logs -n 25: (3.374433469s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-448344 sudo cat /etc/kubernetes/kubelet.conf                                                                                   │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo cat /var/lib/kubelet/config.yaml                                                                                   │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo systemctl status docker --all --full --no-pager                                                                    │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p enable-default-cni-448344 pgrep -a kubelet                                                                                          │ enable-default-cni-448344    │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo systemctl cat docker --no-pager                                                                                    │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo cat /etc/docker/daemon.json                                                                                        │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-448344 sudo docker system info                                                                                                 │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-448344 sudo systemctl status cri-docker --all --full --no-pager                                                                │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-448344 sudo systemctl cat cri-docker --no-pager                                                                                │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-448344 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo cri-dockerd --version                                                                                              │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo systemctl status containerd --all --full --no-pager                                                                │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-448344 sudo systemctl cat containerd --no-pager                                                                                │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo cat /lib/systemd/system/containerd.service                                                                         │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo cat /etc/containerd/config.toml                                                                                    │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo containerd config dump                                                                                             │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo systemctl status crio --all --full --no-pager                                                                      │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo systemctl cat crio --no-pager                                                                                      │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-448344 sudo crio config                                                                                                        │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ delete  │ -p auto-448344                                                                                                                         │ auto-448344                  │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ image   │ default-k8s-diff-port-112878 image list --format=json                                                                                  │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │ 17 Oct 25 19:43 UTC │
	│ pause   │ -p default-k8s-diff-port-112878 --alsologtostderr -v=1                                                                                 │ default-k8s-diff-port-112878 │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	│ start   │ -p calico-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio │ calico-448344                │ jenkins │ v1.37.0 │ 17 Oct 25 19:43 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:43:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:43:54.315070  782802 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:43:54.315397  782802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:43:54.315409  782802 out.go:374] Setting ErrFile to fd 2...
	I1017 19:43:54.315413  782802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:43:54.315647  782802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:43:54.316278  782802 out.go:368] Setting JSON to false
	I1017 19:43:54.317726  782802 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12373,"bootTime":1760717861,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:43:54.317869  782802 start.go:141] virtualization: kvm guest
	I1017 19:43:54.320105  782802 out.go:179] * [calico-448344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:43:54.322196  782802 notify.go:220] Checking for updates...
	I1017 19:43:54.322239  782802 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:43:54.324373  782802 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:43:54.325971  782802 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:43:54.327612  782802 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:43:54.331516  782802 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:43:54.333007  782802 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:43:54.336866  782802 config.go:182] Loaded profile config "default-k8s-diff-port-112878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:54.336982  782802 config.go:182] Loaded profile config "enable-default-cni-448344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:54.337064  782802 config.go:182] Loaded profile config "flannel-448344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:43:54.337191  782802 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:43:54.369143  782802 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:43:54.369315  782802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:43:54.446465  782802 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 19:43:54.433861155 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:43:54.446575  782802 docker.go:318] overlay module found
	I1017 19:43:54.448499  782802 out.go:179] * Using the docker driver based on user configuration
	I1017 19:43:54.451883  782802 start.go:305] selected driver: docker
	I1017 19:43:54.451903  782802 start.go:925] validating driver "docker" against <nil>
	I1017 19:43:54.451917  782802 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:43:54.452528  782802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:43:54.522964  782802 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 19:43:54.508408454 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:43:54.523282  782802 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 19:43:54.523760  782802 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:43:54.526652  782802 out.go:179] * Using Docker driver with root privileges
	I1017 19:43:54.529549  782802 cni.go:84] Creating CNI manager for "calico"
	I1017 19:43:54.529581  782802 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1017 19:43:54.529710  782802 start.go:349] cluster config:
	{Name:calico-448344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-448344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
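For reference, the cluster config dumped above maps directly onto minikube start flags. A minimal sketch of a roughly equivalent invocation, assuming the same profile name and otherwise default settings (the kicbase image digest is pinned by the minikube build itself, so it is not passed on the command line):

  # Reconstruct, approximately, the cluster described by the config dump above.
  minikube start -p calico-448344 \
    --driver=docker \
    --container-runtime=crio \
    --cni=calico \
    --kubernetes-version=v1.34.1 \
    --memory=3072 \
    --cpus=2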
	I1017 19:43:54.531745  782802 out.go:179] * Starting "calico-448344" primary control-plane node in "calico-448344" cluster
	I1017 19:43:54.533091  782802 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:43:54.535483  782802 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:43:54.537225  782802 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:43:54.537294  782802 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:43:54.537310  782802 cache.go:58] Caching tarball of preloaded images
	I1017 19:43:54.537348  782802 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:43:54.537434  782802 preload.go:233] Found /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:43:54.537445  782802 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:43:54.537608  782802 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/calico-448344/config.json ...
	I1017 19:43:54.537637  782802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/calico-448344/config.json: {Name:mk4f4e34c6020c86bfceab4f80996ef433eb9e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:43:54.568474  782802 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:43:54.568585  782802 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:43:54.568662  782802 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:43:54.568757  782802 start.go:360] acquireMachinesLock for calico-448344: {Name:mk7e872c445e9eb17347ddd6685363069805e1af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:43:54.568991  782802 start.go:364] duration metric: took 149.817µs to acquireMachinesLock for "calico-448344"
	I1017 19:43:54.569027  782802 start.go:93] Provisioning new machine with config: &{Name:calico-448344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-448344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:43:54.569274  782802 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Oct 17 19:43:17 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:17.215164641Z" level=info msg="Created container 77d4f52b9e17b76572890624c2496a017110f0f9e062d8447aa724c65828e7ac: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hlrh4/kubernetes-dashboard" id=5103060b-7786-4fad-9675-916c2afd2966 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:17 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:17.216885091Z" level=info msg="Starting container: 77d4f52b9e17b76572890624c2496a017110f0f9e062d8447aa724c65828e7ac" id=89cd6103-9d07-47fd-a467-b18a56d70dc0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:43:17 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:17.219448288Z" level=info msg="Started container" PID=1716 containerID=77d4f52b9e17b76572890624c2496a017110f0f9e062d8447aa724c65828e7ac description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hlrh4/kubernetes-dashboard id=89cd6103-9d07-47fd-a467-b18a56d70dc0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ee509c5dad41b7f7a086ed0a227d403d74450e0dc6cb4a4c210a9ed6f71823d
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.901876155Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d60eba8f-10bf-45d0-bc01-0b2b62436b97 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.902843078Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ccce9042-70e2-44f6-9a5a-339ab8e45e5c name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.903869561Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw/dashboard-metrics-scraper" id=3181c26f-0240-4697-bfcd-0d2e6951e903 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.904163277Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.909821379Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.910617322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.938571865Z" level=info msg="Created container c497bea94d8c6edad854d6e668938a18cf8418cc85439cf5e1a69153d1e8609b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw/dashboard-metrics-scraper" id=3181c26f-0240-4697-bfcd-0d2e6951e903 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.939380838Z" level=info msg="Starting container: c497bea94d8c6edad854d6e668938a18cf8418cc85439cf5e1a69153d1e8609b" id=dbdfa257-c3c6-4e17-8a6b-243bba0aa471 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:43:36 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:36.94144018Z" level=info msg="Started container" PID=1738 containerID=c497bea94d8c6edad854d6e668938a18cf8418cc85439cf5e1a69153d1e8609b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw/dashboard-metrics-scraper id=dbdfa257-c3c6-4e17-8a6b-243bba0aa471 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f22ed893aad69746920d714c825dc88f380e1a7f2efae468600b9a6576645098
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.068385519Z" level=info msg="Removing container: f4466c8b5196f1e504186902977f650b45480aea237793e6079bbe998f5419b6" id=fb556d6a-af70-4010-90fe-446e2aad3d90 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.069653548Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2302eb84-cf2c-4a81-8b30-c2d401f60b07 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.072353133Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8b1095e6-767d-44e2-9bfa-6ddeedd5fb90 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.073954423Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=785efc66-c710-4215-a5f0-ec8be079b0ea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.074373295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.081676122Z" level=info msg="Removed container f4466c8b5196f1e504186902977f650b45480aea237793e6079bbe998f5419b6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw/dashboard-metrics-scraper" id=fb556d6a-af70-4010-90fe-446e2aad3d90 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.081867665Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.082083535Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/046715f1378cc93b62ea0a86f33935d4edefcc497c43d70e7aff9edfb68ae906/merged/etc/passwd: no such file or directory"
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.082123852Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/046715f1378cc93b62ea0a86f33935d4edefcc497c43d70e7aff9edfb68ae906/merged/etc/group: no such file or directory"
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.082457793Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.10547901Z" level=info msg="Created container 2391fc7daf6f0c7ace1a6cb5b28f6b03f222c2106a49c1868f7146db0a965dd7: kube-system/storage-provisioner/storage-provisioner" id=785efc66-c710-4215-a5f0-ec8be079b0ea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.106202128Z" level=info msg="Starting container: 2391fc7daf6f0c7ace1a6cb5b28f6b03f222c2106a49c1868f7146db0a965dd7" id=3b227baa-45a1-4db4-9251-1806a0fd5aa7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:43:37 default-k8s-diff-port-112878 crio[556]: time="2025-10-17T19:43:37.108206057Z" level=info msg="Started container" PID=1748 containerID=2391fc7daf6f0c7ace1a6cb5b28f6b03f222c2106a49c1868f7146db0a965dd7 description=kube-system/storage-provisioner/storage-provisioner id=3b227baa-45a1-4db4-9251-1806a0fd5aa7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e6920fdc28745f4db79b28e4cf319c4937a45372c39e34aa2494443e1c983b5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	2391fc7daf6f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   3e6920fdc2874       storage-provisioner                                    kube-system
	c497bea94d8c6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   f22ed893aad69       dashboard-metrics-scraper-6ffb444bf9-k8cfw             kubernetes-dashboard
	77d4f52b9e17b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   4ee509c5dad41       kubernetes-dashboard-855c9754f9-hlrh4                  kubernetes-dashboard
	92bd478433933       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   8690a151f1dbf       coredns-66bc5c9577-vckxk                               kube-system
	bc6bb7556a0a9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   5ae48466036c4       busybox                                                default
	912bc52145327       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   d8ce64be66b16       kube-proxy-d2jpw                                       kube-system
	592cdb02b1e2a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   3e6920fdc2874       storage-provisioner                                    kube-system
	7cdc338f648b8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   07a6f7d8a2bf3       kindnet-xvc9b                                          kube-system
	506f5ac682e5b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   d1d9127bfd0eb       kube-controller-manager-default-k8s-diff-port-112878   kube-system
	4933ad97f1807       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   b84526a530ced       kube-apiserver-default-k8s-diff-port-112878            kube-system
	901ddd13929fb       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   7575571859ea1       kube-scheduler-default-k8s-diff-port-112878            kube-system
	8df1c4c5ef0c7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   4317f8225eb57       etcd-default-k8s-diff-port-112878                      kube-system
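The container status table above is CRI-level state as seen by the runtime, including exited containers. Assuming shell access to the node, a similar listing can be pulled directly with crictl; a sketch:

  # List all CRI-O containers (running and exited) on this profile's node.
  minikube ssh -p default-k8s-diff-port-112878 "sudo crictl ps -a"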
	
	
	==> coredns [92bd478433933f55dafbdd5f5569e0622bf677028ee7463d99447247a33cae6d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55128 - 9923 "HINFO IN 763171980426719819.3724985484267218911. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.087807694s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
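The i/o timeouts above indicate CoreDNS could not reach the kubernetes Service VIP (10.96.0.1:443) while the control plane was restarting. A quick way to sanity-check that the VIP is backed by a live apiserver once things settle, assuming kubectl points at this cluster's context:

  # Confirm the kubernetes Service exists and has apiserver endpoints behind it.
  kubectl get svc kubernetes -n default
  kubectl get endpointslices -n default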
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-112878
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-112878
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=default-k8s-diff-port-112878
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_42_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:42:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-112878
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:43:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:43:36 +0000   Fri, 17 Oct 2025 19:42:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:43:36 +0000   Fri, 17 Oct 2025 19:42:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:43:36 +0000   Fri, 17 Oct 2025 19:42:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:43:36 +0000   Fri, 17 Oct 2025 19:42:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-112878
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                d9945229-8d2c-480b-8ce0-8f084b03705d
	  Boot ID:                    c8616e78-d085-41cd-a329-f2bcfd9cfa15
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-vckxk                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-default-k8s-diff-port-112878                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-xvc9b                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-default-k8s-diff-port-112878             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-112878    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-d2jpw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-default-k8s-diff-port-112878             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-k8cfw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hlrh4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node default-k8s-diff-port-112878 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node default-k8s-diff-port-112878 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node default-k8s-diff-port-112878 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s               node-controller  Node default-k8s-diff-port-112878 event: Registered Node default-k8s-diff-port-112878 in Controller
	  Normal  NodeReady                94s                kubelet          Node default-k8s-diff-port-112878 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 55s)  kubelet          Node default-k8s-diff-port-112878 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 55s)  kubelet          Node default-k8s-diff-port-112878 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 55s)  kubelet          Node default-k8s-diff-port-112878 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node default-k8s-diff-port-112878 event: Registered Node default-k8s-diff-port-112878 in Controller
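The node description above is standard kubectl output and can be regenerated at any point, assuming the kubeconfig context for this profile:

  # Reproduce the node description captured above.
  kubectl describe node default-k8s-diff-port-112878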
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.024898] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.023862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +1.022907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +2.047801] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[  +4.031525] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +16.382262] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[ +32.252567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 9f 3f 9f 9e 00 9e 84 d1 61 fb f3 08 00
	[Oct17 19:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee e4 05 02 02 de 08 06
	[  +0.011274] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 4e 5e a6 cc 79 08 06
	[ +42.965565] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 34 84 5a d2 5b 08 06
	[  +0.002282] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee e4 05 02 02 de 08 06
	
	
	==> etcd [8df1c4c5ef0c73184c6ef8c075f8289830d708811be044c84f5a1ae516269398] <==
	{"level":"warn","ts":"2025-10-17T19:43:04.758668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.767449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.778818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.787991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.800750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.809873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.817633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.826001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.834581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.844285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.854210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.861074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.871903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.883134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.892245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.908459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.917061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.925408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.933981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.944303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.952403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.964985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.972309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:04.980660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:43:05.060739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51176","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:43:57 up  3:26,  0 user,  load average: 4.12, 3.49, 2.31
	Linux default-k8s-diff-port-112878 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7cdc338f648b8ccfe127c89d4264cccf946e8626662bd3d5e65c3c7cbd06c887] <==
	I1017 19:43:06.507259       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:43:06.507519       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 19:43:06.507764       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:43:06.507843       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:43:06.507890       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:43:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:43:06.805162       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:43:06.805209       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:43:06.805220       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:43:06.805345       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:43:07.006977       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:43:07.007810       1 metrics.go:72] Registering metrics
	I1017 19:43:07.007947       1 controller.go:711] "Syncing nftables rules"
	I1017 19:43:16.804866       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:43:16.804961       1 main.go:301] handling current node
	I1017 19:43:26.808766       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:43:26.808799       1 main.go:301] handling current node
	I1017 19:43:36.805010       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:43:36.805044       1 main.go:301] handling current node
	I1017 19:43:46.805816       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:43:46.805872       1 main.go:301] handling current node
	I1017 19:43:56.812765       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:43:56.812805       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4933ad97f18077b0cfa66f7e6cbc74867dea0c8b55ad78ca6dccc0cac1d91a49] <==
	I1017 19:43:05.628001       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 19:43:05.628821       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:43:05.629031       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:43:05.627578       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1017 19:43:05.629277       1 aggregator.go:171] initial CRD sync complete...
	I1017 19:43:05.629333       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 19:43:05.629340       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 19:43:05.629346       1 cache.go:39] Caches are synced for autoregister controller
	E1017 19:43:05.636753       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 19:43:05.637151       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 19:43:05.653324       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 19:43:05.658563       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 19:43:05.658609       1 policy_source.go:240] refreshing policies
	I1017 19:43:05.669970       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:43:05.986016       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:43:05.994550       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 19:43:06.067769       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 19:43:06.126841       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:43:06.153332       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:43:06.252215       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.168.81"}
	I1017 19:43:06.279258       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.26.243"}
	I1017 19:43:06.531077       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:43:09.373930       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:43:09.422209       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:43:09.524404       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [506f5ac682e5be1da5a5ba36fa52da915314fc50810783bf7bd35773b3730f41] <==
	I1017 19:43:08.968529       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:43:08.968560       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:43:08.968790       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:43:08.968926       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 19:43:08.969383       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 19:43:08.970643       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 19:43:08.972722       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 19:43:08.974945       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 19:43:08.974973       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 19:43:08.975011       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 19:43:08.975040       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 19:43:08.975049       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 19:43:08.975055       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 19:43:08.975039       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:43:08.976333       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 19:43:08.976354       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 19:43:08.978574       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 19:43:08.980803       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 19:43:08.983076       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 19:43:08.984746       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 19:43:08.987098       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 19:43:08.987111       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:43:08.992410       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:43:08.992439       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:43:08.992452       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [912bc52145327590baf7aab5df00ca36437b09ad6e97370f085bd5d74f82ddcd] <==
	I1017 19:43:06.410238       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:43:06.479016       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:43:06.579159       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:43:06.579193       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 19:43:06.579298       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:43:06.605559       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:43:06.605721       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:43:06.612154       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:43:06.612858       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:43:06.612909       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:43:06.615987       1 config.go:309] "Starting node config controller"
	I1017 19:43:06.616033       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:43:06.616042       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:43:06.616494       1 config.go:200] "Starting service config controller"
	I1017 19:43:06.616545       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:43:06.616505       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:43:06.616663       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:43:06.616832       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:43:06.617153       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:43:06.617901       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:43:06.716850       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:43:06.716866       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [901ddd13929fb6920b4881f0a5981c62221ac0353a5ea0b0595b97491426fe6d] <==
	I1017 19:43:04.308199       1 serving.go:386] Generated self-signed cert in-memory
	W1017 19:43:05.562902       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 19:43:05.562997       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 19:43:05.563011       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 19:43:05.563021       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 19:43:05.596109       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 19:43:05.596143       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:43:05.599466       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 19:43:05.599651       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:43:05.600973       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:43:05.599675       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 19:43:05.701577       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:43:09 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:09.593047     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/faf76b76-7636-40eb-98aa-e9ef5eb101bc-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hlrh4\" (UID: \"faf76b76-7636-40eb-98aa-e9ef5eb101bc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hlrh4"
	Oct 17 19:43:09 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:09.593128     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fcdd7205-ad74-4eb3-addd-cfcf1e35074e-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-k8cfw\" (UID: \"fcdd7205-ad74-4eb3-addd-cfcf1e35074e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw"
	Oct 17 19:43:09 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:09.593181     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnt5m\" (UniqueName: \"kubernetes.io/projected/fcdd7205-ad74-4eb3-addd-cfcf1e35074e-kube-api-access-cnt5m\") pod \"dashboard-metrics-scraper-6ffb444bf9-k8cfw\" (UID: \"fcdd7205-ad74-4eb3-addd-cfcf1e35074e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw"
	Oct 17 19:43:09 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:09.593225     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4qdl\" (UniqueName: \"kubernetes.io/projected/faf76b76-7636-40eb-98aa-e9ef5eb101bc-kube-api-access-r4qdl\") pod \"kubernetes-dashboard-855c9754f9-hlrh4\" (UID: \"faf76b76-7636-40eb-98aa-e9ef5eb101bc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hlrh4"
	Oct 17 19:43:12 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:12.998523     707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw" podStartSLOduration=0.89919793 podStartE2EDuration="3.998498021s" podCreationTimestamp="2025-10-17 19:43:09 +0000 UTC" firstStartedPulling="2025-10-17 19:43:09.827955133 +0000 UTC m=+7.047445200" lastFinishedPulling="2025-10-17 19:43:12.92725522 +0000 UTC m=+10.146745291" observedRunningTime="2025-10-17 19:43:12.998139238 +0000 UTC m=+10.217629334" watchObservedRunningTime="2025-10-17 19:43:12.998498021 +0000 UTC m=+10.217988097"
	Oct 17 19:43:13 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:13.992356     707 scope.go:117] "RemoveContainer" containerID="6fa3bb54b9ac5d51983aa653d72d4e0026f4e5c7f8229a1585b5d6570f9572a7"
	Oct 17 19:43:15 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:15.000644     707 scope.go:117] "RemoveContainer" containerID="f4466c8b5196f1e504186902977f650b45480aea237793e6079bbe998f5419b6"
	Oct 17 19:43:15 default-k8s-diff-port-112878 kubelet[707]: E1017 19:43:15.000819     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k8cfw_kubernetes-dashboard(fcdd7205-ad74-4eb3-addd-cfcf1e35074e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw" podUID="fcdd7205-ad74-4eb3-addd-cfcf1e35074e"
	Oct 17 19:43:15 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:15.002784     707 scope.go:117] "RemoveContainer" containerID="6fa3bb54b9ac5d51983aa653d72d4e0026f4e5c7f8229a1585b5d6570f9572a7"
	Oct 17 19:43:16 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:16.007203     707 scope.go:117] "RemoveContainer" containerID="f4466c8b5196f1e504186902977f650b45480aea237793e6079bbe998f5419b6"
	Oct 17 19:43:16 default-k8s-diff-port-112878 kubelet[707]: E1017 19:43:16.007452     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k8cfw_kubernetes-dashboard(fcdd7205-ad74-4eb3-addd-cfcf1e35074e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw" podUID="fcdd7205-ad74-4eb3-addd-cfcf1e35074e"
	Oct 17 19:43:18 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:18.213560     707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hlrh4" podStartSLOduration=1.877752512 podStartE2EDuration="9.213533988s" podCreationTimestamp="2025-10-17 19:43:09 +0000 UTC" firstStartedPulling="2025-10-17 19:43:09.828000431 +0000 UTC m=+7.047490486" lastFinishedPulling="2025-10-17 19:43:17.163781889 +0000 UTC m=+14.383271962" observedRunningTime="2025-10-17 19:43:18.025591235 +0000 UTC m=+15.245081307" watchObservedRunningTime="2025-10-17 19:43:18.213533988 +0000 UTC m=+15.433024063"
	Oct 17 19:43:23 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:23.696389     707 scope.go:117] "RemoveContainer" containerID="f4466c8b5196f1e504186902977f650b45480aea237793e6079bbe998f5419b6"
	Oct 17 19:43:23 default-k8s-diff-port-112878 kubelet[707]: E1017 19:43:23.696638     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k8cfw_kubernetes-dashboard(fcdd7205-ad74-4eb3-addd-cfcf1e35074e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw" podUID="fcdd7205-ad74-4eb3-addd-cfcf1e35074e"
	Oct 17 19:43:36 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:36.901258     707 scope.go:117] "RemoveContainer" containerID="f4466c8b5196f1e504186902977f650b45480aea237793e6079bbe998f5419b6"
	Oct 17 19:43:37 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:37.066968     707 scope.go:117] "RemoveContainer" containerID="f4466c8b5196f1e504186902977f650b45480aea237793e6079bbe998f5419b6"
	Oct 17 19:43:37 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:37.067248     707 scope.go:117] "RemoveContainer" containerID="c497bea94d8c6edad854d6e668938a18cf8418cc85439cf5e1a69153d1e8609b"
	Oct 17 19:43:37 default-k8s-diff-port-112878 kubelet[707]: E1017 19:43:37.067486     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k8cfw_kubernetes-dashboard(fcdd7205-ad74-4eb3-addd-cfcf1e35074e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw" podUID="fcdd7205-ad74-4eb3-addd-cfcf1e35074e"
	Oct 17 19:43:37 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:37.069252     707 scope.go:117] "RemoveContainer" containerID="592cdb02b1e2a86a61705ebc3560af11df6ef568e4e5c623da98e983c8a1cc61"
	Oct 17 19:43:43 default-k8s-diff-port-112878 kubelet[707]: I1017 19:43:43.696816     707 scope.go:117] "RemoveContainer" containerID="c497bea94d8c6edad854d6e668938a18cf8418cc85439cf5e1a69153d1e8609b"
	Oct 17 19:43:43 default-k8s-diff-port-112878 kubelet[707]: E1017 19:43:43.697052     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k8cfw_kubernetes-dashboard(fcdd7205-ad74-4eb3-addd-cfcf1e35074e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k8cfw" podUID="fcdd7205-ad74-4eb3-addd-cfcf1e35074e"
	Oct 17 19:43:52 default-k8s-diff-port-112878 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 19:43:52 default-k8s-diff-port-112878 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 19:43:52 default-k8s-diff-port-112878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 19:43:52 default-k8s-diff-port-112878 systemd[1]: kubelet.service: Consumed 1.714s CPU time.
	
	
	==> kubernetes-dashboard [77d4f52b9e17b76572890624c2496a017110f0f9e062d8447aa724c65828e7ac] <==
	2025/10/17 19:43:17 Using namespace: kubernetes-dashboard
	2025/10/17 19:43:17 Using in-cluster config to connect to apiserver
	2025/10/17 19:43:17 Using secret token for csrf signing
	2025/10/17 19:43:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 19:43:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 19:43:17 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 19:43:17 Generating JWE encryption key
	2025/10/17 19:43:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 19:43:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 19:43:17 Initializing JWE encryption key from synchronized object
	2025/10/17 19:43:17 Creating in-cluster Sidecar client
	2025/10/17 19:43:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 19:43:17 Serving insecurely on HTTP port: 9090
	2025/10/17 19:43:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 19:43:17 Starting overwatch
	
	
	==> storage-provisioner [2391fc7daf6f0c7ace1a6cb5b28f6b03f222c2106a49c1868f7146db0a965dd7] <==
	I1017 19:43:37.120018       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 19:43:37.128160       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 19:43:37.128203       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 19:43:37.130726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:43:40.586986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:43:44.849061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:43:48.448276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:43:51.503057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:43:54.526817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:43:54.531957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:43:54.532145       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 19:43:54.532347       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f6462b38-005f-4c92-8d22-eea640034e0b", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-112878_e5f4f184-b609-4561-a915-75d847001cc2 became leader
	I1017 19:43:54.532857       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-112878_e5f4f184-b609-4561-a915-75d847001cc2!
	W1017 19:43:54.537951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:43:54.543913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:43:54.634023       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-112878_e5f4f184-b609-4561-a915-75d847001cc2!
	W1017 19:43:56.547570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:43:56.553278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:43:58.557028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:43:58.593298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [592cdb02b1e2a86a61705ebc3560af11df6ef568e4e5c623da98e983c8a1cc61] <==
	I1017 19:43:06.359811       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 19:43:36.362159       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
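
Note on the repeated "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner output above: they are emitted because the provisioner's leader election still reads and writes the legacy v1 Endpoints object (the LeaderElection event above references Kind:"Endpoints"). A minimal client-go sketch of the suggested replacement, listing discovery.k8s.io/v1 EndpointSlices instead (illustrative only, not the provisioner's actual code; the namespace is an example):

	// List EndpointSlices rather than the deprecated v1 Endpoints.
	// Assumes in-cluster credentials are available.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}

For the provisioner itself, the warnings would most likely only disappear by moving its leader-election lock from an Endpoints object to a coordination.k8s.io/v1 Lease.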
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-112878 -n default-k8s-diff-port-112878
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-112878 -n default-k8s-diff-port-112878: exit status 2 (417.569092ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-112878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.94s)
E1017 19:45:11.576651  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:45:11.583169  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:45:11.594723  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:45:11.616381  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:45:11.658616  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:45:11.740519  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:45:11.902550  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:45:12.224858  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:45:12.867277  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:45:14.149371  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (264/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.24
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 3.78
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.42
21 TestBinaryMirror 0.84
22 TestOffline 87.3
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 165.63
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.45
48 TestAddons/StoppedEnableDisable 18.56
49 TestCertOptions 25.26
50 TestCertExpiration 214.23
52 TestForceSystemdFlag 39.12
53 TestForceSystemdEnv 33.3
55 TestKVMDriverInstallOrUpdate 1.09
59 TestErrorSpam/setup 23.47
60 TestErrorSpam/start 0.68
61 TestErrorSpam/status 0.95
62 TestErrorSpam/pause 5.51
63 TestErrorSpam/unpause 5.81
64 TestErrorSpam/stop 2.63
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 42.83
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.42
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.07
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.15
76 TestFunctional/serial/CacheCmd/cache/add_local 1.18
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 41.47
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.3
87 TestFunctional/serial/LogsFileCmd 1.32
88 TestFunctional/serial/InvalidService 3.98
90 TestFunctional/parallel/ConfigCmd 0.37
91 TestFunctional/parallel/DashboardCmd 7.7
92 TestFunctional/parallel/DryRun 0.39
93 TestFunctional/parallel/InternationalLanguage 0.17
94 TestFunctional/parallel/StatusCmd 0.99
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 28.3
102 TestFunctional/parallel/SSHCmd 0.55
103 TestFunctional/parallel/CpCmd 1.7
104 TestFunctional/parallel/MySQL 15.96
105 TestFunctional/parallel/FileSync 0.28
106 TestFunctional/parallel/CertSync 1.68
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
114 TestFunctional/parallel/License 0.46
116 TestFunctional/parallel/Version/short 0.05
117 TestFunctional/parallel/Version/components 0.5
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
122 TestFunctional/parallel/ImageCommands/ImageBuild 2.25
123 TestFunctional/parallel/ImageCommands/Setup 1
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
131 TestFunctional/parallel/ImageCommands/ImageRemove 2.09
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
135 TestFunctional/parallel/ProfileCmd/profile_list 0.42
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
137 TestFunctional/parallel/MountCmd/any-port 5.97
138 TestFunctional/parallel/MountCmd/specific-port 1.84
140 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.42
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.54
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.28
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
151 TestFunctional/parallel/ServiceCmd/List 1.71
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.71
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 121.92
164 TestMultiControlPlane/serial/DeployApp 4.3
165 TestMultiControlPlane/serial/PingHostFromPods 1.03
166 TestMultiControlPlane/serial/AddWorkerNode 24.65
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
169 TestMultiControlPlane/serial/CopyFile 17.76
170 TestMultiControlPlane/serial/StopSecondaryNode 13.75
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
172 TestMultiControlPlane/serial/RestartSecondaryNode 14.62
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.93
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 107.04
175 TestMultiControlPlane/serial/DeleteSecondaryNode 10.67
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
177 TestMultiControlPlane/serial/StopCluster 41.81
178 TestMultiControlPlane/serial/RestartCluster 56.84
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
180 TestMultiControlPlane/serial/AddSecondaryNode 50.62
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
185 TestJSONOutput/start/Command 40.09
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.07
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 27.5
211 TestKicCustomNetwork/use_default_bridge_network 23.98
212 TestKicExistingNetwork 27.77
213 TestKicCustomSubnet 23.97
214 TestKicStaticIP 25.49
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 50.03
219 TestMountStart/serial/StartWithMountFirst 6.16
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 5.74
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.73
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.26
226 TestMountStart/serial/RestartStopped 7.1
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 68.97
231 TestMultiNode/serial/DeployApp2Nodes 3.38
232 TestMultiNode/serial/PingHostFrom2Pods 0.71
233 TestMultiNode/serial/AddNode 24.64
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.66
236 TestMultiNode/serial/CopyFile 9.79
237 TestMultiNode/serial/StopNode 2.27
238 TestMultiNode/serial/StartAfterStop 7.35
239 TestMultiNode/serial/RestartKeepsNodes 78.31
240 TestMultiNode/serial/DeleteNode 5.36
241 TestMultiNode/serial/StopMultiNode 28.61
242 TestMultiNode/serial/RestartMultiNode 51.64
243 TestMultiNode/serial/ValidateNameConflict 26.53
248 TestPreload 105.17
250 TestScheduledStopUnix 98.44
253 TestInsufficientStorage 9.83
254 TestRunningBinaryUpgrade 47.22
256 TestKubernetesUpgrade 301.61
257 TestMissingContainerUpgrade 79.33
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 32.17
261 TestNoKubernetes/serial/StartWithStopK8s 28.26
262 TestNoKubernetes/serial/Start 5.14
263 TestStoppedBinaryUpgrade/Setup 0.51
264 TestStoppedBinaryUpgrade/Upgrade 62.1
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
266 TestNoKubernetes/serial/ProfileList 1.78
267 TestNoKubernetes/serial/Stop 4.55
268 TestNoKubernetes/serial/StartNoArgs 8.93
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
277 TestStoppedBinaryUpgrade/MinikubeLogs 1.02
279 TestPause/serial/Start 38.65
287 TestNetworkPlugins/group/false 3.54
292 TestStartStop/group/old-k8s-version/serial/FirstStart 49.04
293 TestPause/serial/SecondStartNoReconfiguration 6.2
296 TestStartStop/group/no-preload/serial/FirstStart 52.16
297 TestStartStop/group/old-k8s-version/serial/DeployApp 8.29
300 TestStartStop/group/embed-certs/serial/FirstStart 41.2
301 TestStartStop/group/old-k8s-version/serial/Stop 16.06
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
303 TestStartStop/group/old-k8s-version/serial/SecondStart 44.5
304 TestStartStop/group/no-preload/serial/DeployApp 8.3
306 TestStartStop/group/no-preload/serial/Stop 16.65
307 TestStartStop/group/embed-certs/serial/DeployApp 8.3
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
309 TestStartStop/group/no-preload/serial/SecondStart 45.71
311 TestStartStop/group/embed-certs/serial/Stop 16.55
312 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
313 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
314 TestStartStop/group/embed-certs/serial/SecondStart 52.56
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.2
320 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
321 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
322 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
325 TestStartStop/group/newest-cni/serial/FirstStart 29.68
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.29
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
329 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.47
333 TestNetworkPlugins/group/auto/Start 43.33
334 TestNetworkPlugins/group/enable-default-cni/Start 62.59
335 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/Stop 18.07
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
339 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 44.8
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.31
341 TestStartStop/group/newest-cni/serial/SecondStart 12.44
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
346 TestNetworkPlugins/group/auto/KubeletFlags 0.32
347 TestNetworkPlugins/group/auto/NetCatPod 8.22
348 TestNetworkPlugins/group/flannel/Start 53.96
349 TestNetworkPlugins/group/auto/DNS 0.12
350 TestNetworkPlugins/group/auto/Localhost 0.1
351 TestNetworkPlugins/group/auto/HairPin 0.11
352 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
353 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
354 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.23
355 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
356 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
358 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
359 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
360 TestNetworkPlugins/group/calico/Start 51.72
361 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
362 TestNetworkPlugins/group/bridge/Start 37.08
363 TestNetworkPlugins/group/custom-flannel/Start 54.67
364 TestNetworkPlugins/group/flannel/ControllerPod 6.01
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
366 TestNetworkPlugins/group/flannel/NetCatPod 9.61
367 TestNetworkPlugins/group/flannel/DNS 0.15
368 TestNetworkPlugins/group/flannel/Localhost 0.15
369 TestNetworkPlugins/group/flannel/HairPin 0.13
370 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
371 TestNetworkPlugins/group/bridge/NetCatPod 8.24
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/bridge/DNS 0.16
374 TestNetworkPlugins/group/bridge/Localhost 0.13
375 TestNetworkPlugins/group/bridge/HairPin 0.12
376 TestNetworkPlugins/group/calico/KubeletFlags 0.33
377 TestNetworkPlugins/group/calico/NetCatPod 9.2
378 TestNetworkPlugins/group/kindnet/Start 41.7
379 TestNetworkPlugins/group/calico/DNS 0.15
380 TestNetworkPlugins/group/calico/Localhost 0.1
381 TestNetworkPlugins/group/calico/HairPin 0.11
382 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
383 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.29
384 TestNetworkPlugins/group/custom-flannel/DNS 0.13
385 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
386 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
387 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
388 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
389 TestNetworkPlugins/group/kindnet/NetCatPod 8.18
390 TestNetworkPlugins/group/kindnet/DNS 0.12
391 TestNetworkPlugins/group/kindnet/Localhost 0.09
392 TestNetworkPlugins/group/kindnet/HairPin 0.09
x
+
TestDownloadOnly/v1.28.0/json-events (4.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-116436 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-116436 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.242985572s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.24s)
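
The json-events subtests drive `minikube start -o=json`, which prints one JSON event per line to stdout. A rough reader for such a stream, decoding each line generically rather than assuming minikube's full event schema (looking only at a "type" key, whose presence is an assumption here):

	// Read newline-delimited JSON events from stdin and print their type.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // events can be long lines
		for sc.Scan() {
			var ev map[string]any
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON noise in the stream
			}
			fmt.Println("event type:", ev["type"])
		}
	}

Piping the `out/minikube-linux-amd64 start -o=json ...` output above into this would list the emitted event types in order.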

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1017 18:56:17.173600  495725 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1017 18:56:17.173759  495725 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
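
The preload-exists check amounts to a stat of the cached tarball named in the log. A stand-alone sketch of the same idea (the real test resolves the cache root from MINIKUBE_HOME; using the home directory here is an assumption for illustration):

	// Check whether the v1.28.0 cri-o preload tarball is already cached.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		home, _ := os.UserHomeDir()
		p := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4")
		if _, err := os.Stat(p); err != nil {
			fmt.Println("preload missing:", err)
			return
		}
		fmt.Println("preload found:", p)
	}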

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-116436
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-116436: exit status 85 (72.20612ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-116436 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-116436 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 18:56:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 18:56:12.976010  495737 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:56:12.976268  495737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:12.976277  495737 out.go:374] Setting ErrFile to fd 2...
	I1017 18:56:12.976281  495737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:12.976497  495737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	W1017 18:56:12.976639  495737 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21753-492109/.minikube/config/config.json: open /home/jenkins/minikube-integration/21753-492109/.minikube/config/config.json: no such file or directory
	I1017 18:56:12.977176  495737 out.go:368] Setting JSON to true
	I1017 18:56:12.978178  495737 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9512,"bootTime":1760717861,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 18:56:12.978303  495737 start.go:141] virtualization: kvm guest
	I1017 18:56:12.980596  495737 out.go:99] [download-only-116436] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1017 18:56:12.980738  495737 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball: no such file or directory
	I1017 18:56:12.980795  495737 notify.go:220] Checking for updates...
	I1017 18:56:12.982160  495737 out.go:171] MINIKUBE_LOCATION=21753
	I1017 18:56:12.983640  495737 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 18:56:12.985185  495737 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 18:56:12.986726  495737 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 18:56:12.988272  495737 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1017 18:56:12.990775  495737 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1017 18:56:12.991173  495737 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 18:56:13.015141  495737 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 18:56:13.015240  495737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 18:56:13.076035  495737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-17 18:56:13.065532834 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 18:56:13.076166  495737 docker.go:318] overlay module found
	I1017 18:56:13.077888  495737 out.go:99] Using the docker driver based on user configuration
	I1017 18:56:13.077921  495737 start.go:305] selected driver: docker
	I1017 18:56:13.077928  495737 start.go:925] validating driver "docker" against <nil>
	I1017 18:56:13.078038  495737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 18:56:13.137637  495737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-17 18:56:13.12710673 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 18:56:13.137890  495737 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 18:56:13.138435  495737 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1017 18:56:13.138610  495737 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1017 18:56:13.140677  495737 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-116436 host does not exist
	  To start a cluster, run: "minikube start -p download-only-116436"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
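
The `docker system info --format "{{json .}}"` invocations in the Last Start log above return the large JSON blob that info.go then decodes. A small sketch that shells out the same way and unmarshals a few of the fields visible in that blob (the struct and field selection are ours, not minikube's):

	// Decode selected fields from `docker system info --format "{{json .}}"`.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type dockerInfo struct {
		ServerVersion   string `json:"ServerVersion"`
		OperatingSystem string `json:"OperatingSystem"`
		NCPU            int    `json:"NCPU"`
		MemTotal        int64  `json:"MemTotal"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("docker %s on %s, %d CPUs, %d bytes RAM\n",
			info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
	}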

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-116436
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (3.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-808492 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-808492 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.780836745s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.78s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1017 18:56:21.409192  495725 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1017 18:56:21.409252  495725 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-492109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-808492
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-808492: exit status 85 (68.243767ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-116436 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-116436 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-116436                                                                                                                                                   │ download-only-116436 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ start   │ -o=json --download-only -p download-only-808492 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-808492 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 18:56:17
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 18:56:17.673854  496074 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:56:17.673973  496074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:17.673981  496074 out.go:374] Setting ErrFile to fd 2...
	I1017 18:56:17.673987  496074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:17.674266  496074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 18:56:17.674829  496074 out.go:368] Setting JSON to true
	I1017 18:56:17.675843  496074 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9517,"bootTime":1760717861,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 18:56:17.675955  496074 start.go:141] virtualization: kvm guest
	I1017 18:56:17.678024  496074 out.go:99] [download-only-808492] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 18:56:17.678245  496074 notify.go:220] Checking for updates...
	I1017 18:56:17.679677  496074 out.go:171] MINIKUBE_LOCATION=21753
	I1017 18:56:17.681157  496074 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 18:56:17.682586  496074 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 18:56:17.683918  496074 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 18:56:17.688927  496074 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1017 18:56:17.691664  496074 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1017 18:56:17.691981  496074 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 18:56:17.716947  496074 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 18:56:17.717040  496074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 18:56:17.779098  496074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-17 18:56:17.767437788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 18:56:17.779207  496074 docker.go:318] overlay module found
	I1017 18:56:17.780812  496074 out.go:99] Using the docker driver based on user configuration
	I1017 18:56:17.780854  496074 start.go:305] selected driver: docker
	I1017 18:56:17.780860  496074 start.go:925] validating driver "docker" against <nil>
	I1017 18:56:17.780945  496074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 18:56:17.841531  496074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-17 18:56:17.831019286 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 18:56:17.841728  496074 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 18:56:17.842227  496074 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1017 18:56:17.842378  496074 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1017 18:56:17.844100  496074 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-808492 host does not exist
	  To start a cluster, run: "minikube start -p download-only-808492"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-808492
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.42s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-352708 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-352708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-352708
--- PASS: TestDownloadOnlyKic (0.42s)

                                                
                                    
x
+
TestBinaryMirror (0.84s)

                                                
                                                
=== RUN   TestBinaryMirror
I1017 18:56:22.558075  495725 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-386230 --alsologtostderr --binary-mirror http://127.0.0.1:41417 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-386230" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-386230
--- PASS: TestBinaryMirror (0.84s)
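
binary.go skips caching here because the kubectl URL carries a `checksum=file:...sha256` hint, so the download can be verified against the published digest. A hedged sketch of that pattern (error handling is minimal, and the .sha256 file is assumed to begin with the hex digest):

	// Download kubectl and verify it against its published sha256 digest.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl"
		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}
		fields := strings.Fields(string(sum))
		if len(fields) == 0 {
			panic("empty checksum file")
		}
		got := sha256.Sum256(bin)
		if hex.EncodeToString(got[:]) != fields[0] {
			panic("checksum mismatch")
		}
		fmt.Println("kubectl verified:", fields[0])
	}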

                                                
                                    
x
+
TestOffline (87.3s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-539039 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-539039 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m22.044853876s)
helpers_test.go:175: Cleaning up "offline-crio-539039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-539039
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-539039: (5.2531239s)
--- PASS: TestOffline (87.30s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-642189
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-642189: exit status 85 (60.99378ms)

                                                
                                                
-- stdout --
	* Profile "addons-642189" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-642189"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-642189
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-642189: exit status 85 (59.92004ms)

                                                
                                                
-- stdout --
	* Profile "addons-642189" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-642189"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (165.63s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-642189 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-642189 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m45.634458397s)
--- PASS: TestAddons/Setup (165.63s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-642189 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-642189 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.45s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-642189 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-642189 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [11e3a0a3-e413-4307-8a33-7461887a2188] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [11e3a0a3-e413-4307-8a33-7461887a2188] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003688825s
addons_test.go:694: (dbg) Run:  kubectl --context addons-642189 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-642189 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-642189 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.45s)
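The pattern here is wait-then-verify: block until a pod matching integration-test=busybox reports Running (up to 8m0s), then read the env vars the gcp-auth webhook injects. A minimal Go sketch of that flow, shelling out to kubectl the way the harness does; the profile name, label, and timeout are taken from the log above, and waitForRunning is a hypothetical stand-in for the suite's own helpers:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls pod phases for a label selector until one is Running.
func waitForRunning(context, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pods", "-l", label,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("no Running pod for %q within %v", label, timeout)
}

func main() {
	if err := waitForRunning("addons-642189", "integration-test=busybox", 8*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	// Verify the gcp-auth webhook injected the credentials path into the pod.
	out, err := exec.Command("kubectl", "--context", "addons-642189",
		"exec", "busybox", "--", "printenv", "GOOGLE_APPLICATION_CREDENTIALS").Output()
	fmt.Printf("GOOGLE_APPLICATION_CREDENTIALS=%s err=%v\n", strings.TrimSpace(string(out)), err)
}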

                                                
                                    
TestAddons/StoppedEnableDisable (18.56s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-642189
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-642189: (18.28046448s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-642189
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-642189
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-642189
--- PASS: TestAddons/StoppedEnableDisable (18.56s)

                                                
                                    
TestCertOptions (25.26s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-983279 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-983279 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (21.986779453s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-983279 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-983279 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-983279 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-983279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-983279
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-983279: (2.537026491s)
--- PASS: TestCertOptions (25.26s)
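The openssl step above is checking that the extra --apiserver-ips and --apiserver-names made it into the API server certificate's SANs. The same check can be done natively in Go; reading apiserver.crt from the working directory is an assumption of this sketch (in the test the file lives inside the node at /var/lib/minikube/certs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetically copied out of the node
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost, www.google.com, ...
	fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
}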

                                                
                                    
TestCertExpiration (214.23s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-141205 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-141205 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.213030223s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-141205 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-141205 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (7.436064006s)
helpers_test.go:175: Cleaning up "cert-expiration-141205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-141205
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-141205: (2.584362978s)
--- PASS: TestCertExpiration (214.23s)

                                                
                                    
TestForceSystemdFlag (39.12s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-788108 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-788108 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.939545411s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-788108 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-788108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-788108
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-788108: (4.840849661s)
--- PASS: TestForceSystemdFlag (39.12s)
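The cat of /etc/crio/crio.conf.d/02-crio.conf is how the test confirms --force-systemd took effect. Reduced to a sketch, the assertion is a substring check for the standard crio setting; reading the file locally is an assumption (the test cats it over ssh):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(data), `cgroup_manager = "systemd"`) {
		fmt.Println("crio is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not configured")
	}
}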

                                                
                                    
TestForceSystemdEnv (33.3s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-607506 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-607506 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (30.518922056s)
helpers_test.go:175: Cleaning up "force-systemd-env-607506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-607506
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-607506: (2.780627053s)
--- PASS: TestForceSystemdEnv (33.30s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.09s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1017 19:39:21.431165  495725 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1017 19:39:21.431300  495725 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate804648542/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1017 19:39:21.462804  495725 install.go:163] /tmp/TestKVMDriverInstallOrUpdate804648542/001/docker-machine-driver-kvm2 version is 1.1.1
W1017 19:39:21.462847  495725 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1017 19:39:21.462969  495725 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1017 19:39:21.463017  495725 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate804648542/001/docker-machine-driver-kvm2
I1017 19:39:22.376555  495725 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate804648542/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1017 19:39:22.392171  495725 install.go:163] /tmp/TestKVMDriverInstallOrUpdate804648542/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.09s)
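The install/update gate visible in the log is: validate the driver already on disk, and only download when its version (1.1.1 here) lags the wanted release (1.37.0). A rough Go sketch of that decision under assumptions; the "version" subcommand and its output parsing are guesses at the driver's interface, and the checksum verification that download.go performs via the .sha256 URL is omitted for brevity:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"os/exec"
	"strings"
)

const want = "1.37.0"

// installedVersion asks the driver binary for its version. The "version"
// subcommand and output shape are assumptions for this sketch.
func installedVersion(path string) (string, error) {
	out, err := exec.Command(path, "version").Output()
	if err != nil {
		return "", err
	}
	v := strings.TrimSpace(string(out))
	v = strings.TrimSpace(strings.TrimPrefix(v, "version:"))
	return strings.TrimPrefix(v, "v"), nil
}

func main() {
	driver := "/tmp/docker-machine-driver-kvm2" // hypothetical install location
	if v, err := installedVersion(driver); err == nil && v == want {
		fmt.Println("driver up to date:", v)
		return
	}
	url := "https://github.com/kubernetes/minikube/releases/download/v" +
		want + "/docker-machine-driver-kvm2-amd64"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	f, err := os.OpenFile(driver, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if _, err := io.Copy(f, resp.Body); err != nil {
		panic(err)
	}
	fmt.Println("downloaded", url)
}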

                                                
                                    
TestErrorSpam/setup (23.47s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-216991 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-216991 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-216991 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-216991 --driver=docker  --container-runtime=crio: (23.465459982s)
--- PASS: TestErrorSpam/setup (23.47s)

                                                
                                    
TestErrorSpam/start (0.68s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

                                                
                                    
TestErrorSpam/status (0.95s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 status
--- PASS: TestErrorSpam/status (0.95s)

                                                
                                    
TestErrorSpam/pause (5.51s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 pause: exit status 80 (2.20125569s)

                                                
                                                
-- stdout --
	* Pausing node nospam-216991 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:02:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 pause: exit status 80 (1.856594401s)

                                                
                                                
-- stdout --
	* Pausing node nospam-216991 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:02:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 pause: exit status 80 (1.456058575s)

                                                
                                                
-- stdout --
	* Pausing node nospam-216991 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:02:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.51s)
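All three pause attempts fail the same way: minikube's pause path shells into the node and runs sudo runc list -f json, which can't open /run/runc, likely because crio keeps its runtime state in a different root. The probe is easy to reproduce directly; this sketch assumes the profile from the log is still running:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same command the pause path runs inside the node.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "nospam-216991",
		"ssh", "--", "sudo", "runc", "list", "-f", "json")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output: %s\nerr: %v\n", out, err)
}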

                                                
                                    
TestErrorSpam/unpause (5.81s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 unpause: exit status 80 (1.911410836s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-216991 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:02:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 unpause: exit status 80 (2.217874461s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-216991 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:03:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 unpause: exit status 80 (1.677073107s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-216991 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:03:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.81s)

                                                
                                    
TestErrorSpam/stop (2.63s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 stop: (2.432479462s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216991 --log_dir /tmp/nospam-216991 stop
--- PASS: TestErrorSpam/stop (2.63s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21753-492109/.minikube/files/etc/test/nested/copy/495725/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (42.83s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397448 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-397448 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (42.826382848s)
--- PASS: TestFunctional/serial/StartWithProxy (42.83s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.42s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1017 19:03:51.937975  495725 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397448 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-397448 --alsologtostderr -v=8: (6.421833972s)
functional_test.go:678: soft start took 6.422606079s for "functional-397448" cluster.
I1017 19:03:58.360194  495725 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.42s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-397448 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-397448 cache add registry.k8s.io/pause:3.1: (1.036788177s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-397448 cache add registry.k8s.io/pause:3.3: (1.062209294s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-397448 cache add registry.k8s.io/pause:latest: (1.049588753s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-397448 /tmp/TestFunctionalserialCacheCmdcacheadd_local2559526980/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 cache add minikube-local-cache-test:functional-397448
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 cache delete minikube-local-cache-test:functional-397448
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-397448
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397448 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.43704ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
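The reload cycle is: remove the cached image inside the node, prove it is gone (the inspecti is expected to fail, hence the exit status 1 above), run cache reload, then prove it is back. A condensed sketch with the same CLI calls, profile name taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used throughout this report and echoes output.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	p := "functional-397448"
	_ = run("-p", p, "ssh", "sudo", "crictl", "rmi", "registry.k8s.io/pause:latest")
	if err := run("-p", p, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("expected the image to be gone at this point")
	}
	_ = run("-p", p, "cache", "reload")
	if err := run("-p", p, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}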

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 kubectl -- --context functional-397448 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-397448 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.47s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397448 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1017 19:04:09.662565  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:09.669001  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:09.680460  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:09.701892  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:09.743479  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:09.824917  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:09.986499  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:10.308315  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:10.950400  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:12.232054  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:14.794824  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:19.916550  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:30.158871  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-397448 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.466962708s)
functional_test.go:776: restart took 41.467103599s for "functional-397448" cluster.
I1017 19:04:46.688209  495725 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (41.47s)
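The cert_rotation errors above are expected noise from a deleted profile, and their timestamps land at roughly doubling intervals (about 7ms, 11ms, 21ms, 42ms, ... up to ~10s): a capped exponential backoff. A generic sketch of that retry loop, not minikube's actual implementation:

package main

import (
	"fmt"
	"time"
)

// retryWithBackoff retries op with a delay that doubles up to a cap.
func retryWithBackoff(op func() error, initial, max time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		time.Sleep(delay)
		if delay *= 2; delay > max {
			delay = max
		}
	}
	return err
}

func main() {
	i := 0
	err := retryWithBackoff(func() error {
		i++
		if i < 4 {
			return fmt.Errorf("open client.crt: no such file or directory")
		}
		return nil
	}, 5*time.Millisecond, 10*time.Second, 13)
	fmt.Println("result:", err, "after", i, "attempts")
}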

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-397448 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-397448 logs: (1.302171793s)
--- PASS: TestFunctional/serial/LogsCmd (1.30s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 logs --file /tmp/TestFunctionalserialLogsFileCmd3754472570/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-397448 logs --file /tmp/TestFunctionalserialLogsFileCmd3754472570/001/logs.txt: (1.31656469s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                    
TestFunctional/serial/InvalidService (3.98s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-397448 apply -f testdata/invalidsvc.yaml
E1017 19:04:50.640938  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-397448
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-397448: exit status 115 (354.857327ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32686 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-397448 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.98s)
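The assertion here is that minikube service against a service with no running pods exits non-zero (115, SVC_UNREACHABLE) even though it prints the URL table first. A minimal sketch of extracting that exit code, using the standard exec.ExitError path:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-397448")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit code:", ee.ExitCode()) // 115 in the run above
		return
	}
	fmt.Println("expected a non-zero exit, got:", err)
}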

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397448 config get cpus: exit status 14 (74.422972ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397448 config get cpus: exit status 14 (56.61203ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-397448 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-397448 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 535342: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.70s)

                                                
                                    
TestFunctional/parallel/DryRun (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397448 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-397448 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (164.408946ms)

                                                
                                                
-- stdout --
	* [functional-397448] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:05:32.232076  535542 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:05:32.232182  535542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:05:32.232186  535542 out.go:374] Setting ErrFile to fd 2...
	I1017 19:05:32.232190  535542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:05:32.232377  535542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:05:32.232985  535542 out.go:368] Setting JSON to false
	I1017 19:05:32.234172  535542 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10071,"bootTime":1760717861,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:05:32.234309  535542 start.go:141] virtualization: kvm guest
	I1017 19:05:32.236632  535542 out.go:179] * [functional-397448] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:05:32.238745  535542 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:05:32.238737  535542 notify.go:220] Checking for updates...
	I1017 19:05:32.240290  535542 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:05:32.241712  535542 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:05:32.243111  535542 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:05:32.244276  535542 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:05:32.245552  535542 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:05:32.247484  535542 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:05:32.248212  535542 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:05:32.273566  535542 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:05:32.273648  535542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:05:32.334381  535542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-17 19:05:32.323749399 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:05:32.334524  535542 docker.go:318] overlay module found
	I1017 19:05:32.337463  535542 out.go:179] * Using the docker driver based on existing profile
	I1017 19:05:32.339024  535542 start.go:305] selected driver: docker
	I1017 19:05:32.339044  535542 start.go:925] validating driver "docker" against &{Name:functional-397448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-397448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:05:32.339148  535542 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:05:32.341212  535542 out.go:203] 
	W1017 19:05:32.342863  535542 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1017 19:05:32.344219  535542 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397448 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.39s)

TestFunctional/parallel/InternationalLanguage (0.17s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397448 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-397448 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (170.679021ms)
-- stdout --
	* [functional-397448] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1017 19:05:13.790542  531458 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:05:13.790846  531458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:05:13.790857  531458 out.go:374] Setting ErrFile to fd 2...
	I1017 19:05:13.790864  531458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:05:13.791199  531458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:05:13.791762  531458 out.go:368] Setting JSON to false
	I1017 19:05:13.793058  531458 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10053,"bootTime":1760717861,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:05:13.793180  531458 start.go:141] virtualization: kvm guest
	I1017 19:05:13.795564  531458 out.go:179] * [functional-397448] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1017 19:05:13.797215  531458 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:05:13.797221  531458 notify.go:220] Checking for updates...
	I1017 19:05:13.798716  531458 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:05:13.800470  531458 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:05:13.802109  531458 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:05:13.803409  531458 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:05:13.804713  531458 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:05:13.806545  531458 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:05:13.807310  531458 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:05:13.832666  531458 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:05:13.832841  531458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:05:13.893897  531458 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-17 19:05:13.88132721 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:05:13.894041  531458 docker.go:318] overlay module found
	I1017 19:05:13.897699  531458 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1017 19:05:13.899019  531458 start.go:305] selected driver: docker
	I1017 19:05:13.899041  531458 start.go:925] validating driver "docker" against &{Name:functional-397448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-397448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:05:13.899138  531458 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:05:13.901040  531458 out.go:203] 
	W1017 19:05:13.902626  531458 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1017 19:05:13.904236  531458 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
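
Note on the French output above: the test re-runs the same dry-run start under a non-English locale and expects the localized RSRC_INSUFFICIENT_REQ_MEMORY message. A minimal sketch of reproducing this by hand, assuming minikube picks the display language up from LC_ALL/LANG (an assumption; the exact mechanism is not shown in this log) and reusing the binary, profile, and flags from this run:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same dry-run start as the test, but forced into a French locale.
	// Assumption: minikube derives its display language from LC_ALL/LANG.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-397448",
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=docker", "--container-runtime=crio")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, err := cmd.CombinedOutput()
	// exit status 23 is expected: 250MB is below the 1800MB usable minimum.
	fmt.Printf("err=%v\n%s", err, out)
}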

TestFunctional/parallel/StatusCmd (0.99s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)
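
The -f argument above is a Go text/template rendered against minikube's status structure; the labels between the placeholders (including the "kublet" spelling) are literal template text, not field references. A minimal stand-alone sketch of that rendering, with an assumed stand-in struct:

package main

import (
	"os"
	"text/template"
)

// status is a stand-in for minikube's status struct; only the field
// names referenced by the template matter for this sketch.
type status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	format := "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	// Prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
	_ = tmpl.Execute(os.Stdout, status{"Running", "Running", "Running", "Configured"})
}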

TestFunctional/parallel/AddonsCmd (0.13s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (28.3s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [51c85834-24ac-47df-99b9-8d0819ab7c0f] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005099931s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-397448 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-397448 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-397448 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-397448 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [5966b11e-32fb-4a31-8f77-a7e097b36d6f] Pending
helpers_test.go:352: "sp-pod" [5966b11e-32fb-4a31-8f77-a7e097b36d6f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [5966b11e-32fb-4a31-8f77-a7e097b36d6f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004023044s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-397448 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-397448 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-397448 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [c3566fd2-92ee-40e2-b803-0ae98ab60086] Pending
helpers_test.go:352: "sp-pod" [c3566fd2-92ee-40e2-b803-0ae98ab60086] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [c3566fd2-92ee-40e2-b803-0ae98ab60086] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004129075s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-397448 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.30s)
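
The steps above are the whole persistence check: write a file through the first sp-pod, delete and recreate the pod from the same manifest, and confirm the file is still present on the PVC-backed mount. A condensed sketch of the same flow, assuming kubectl, the functional-397448 context, and the testdata manifests from this run:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the test cluster's context.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-397448"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	// Write through the first pod, then recreate it from the same manifest.
	_, _ = kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	_, _ = kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	_, _ = kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// The real test waits for the new pod to be Running before this step.
	out, _ := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("%s", out) // expect "foo" to still be listed
}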

TestFunctional/parallel/SSHCmd (0.55s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "cat /etc/hostname"
I1017 19:05:13.566435  495725 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

TestFunctional/parallel/CpCmd (1.7s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh -n functional-397448 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 cp functional-397448:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2326261946/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh -n functional-397448 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh -n functional-397448 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.70s)

TestFunctional/parallel/MySQL (15.96s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-397448 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-xhhl6" [5fad499a-fdb3-4edc-b7db-a36bcd99c51a] Pending
helpers_test.go:352: "mysql-5bb876957f-xhhl6" [5fad499a-fdb3-4edc-b7db-a36bcd99c51a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-xhhl6" [5fad499a-fdb3-4edc-b7db-a36bcd99c51a] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 13.003690057s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-397448 exec mysql-5bb876957f-xhhl6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-397448 exec mysql-5bb876957f-xhhl6 -- mysql -ppassword -e "show databases;": exit status 1 (96.925438ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1017 19:05:10.501001  495725 retry.go:31] will retry after 747.152161ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-397448 exec mysql-5bb876957f-xhhl6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-397448 exec mysql-5bb876957f-xhhl6 -- mysql -ppassword -e "show databases;": exit status 1 (91.634127ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1017 19:05:11.341029  495725 retry.go:31] will retry after 1.745132584s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-397448 exec mysql-5bb876957f-xhhl6 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (15.96s)
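
The "will retry after ..." lines show the harness retrying the mysql client with growing, jittered delays until the server inside the pod starts accepting connections. An illustrative sketch of that retry pattern (not minikube's actual retry.go):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry re-runs f with exponentially growing, jittered delays until it
// succeeds or the attempt budget is exhausted.
func retry(attempts int, base time.Duration, f func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retry(5, 500*time.Millisecond, func() error {
		return fmt.Errorf("ERROR 2002 (HY000): can't connect yet")
	})
}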

TestFunctional/parallel/FileSync (0.28s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/495725/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "sudo cat /etc/test/nested/copy/495725/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.68s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/495725.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "sudo cat /etc/ssl/certs/495725.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/495725.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "sudo cat /usr/share/ca-certificates/495725.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4957252.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "sudo cat /etc/ssl/certs/4957252.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4957252.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "sudo cat /usr/share/ca-certificates/4957252.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.68s)

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-397448 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397448 ssh "sudo systemctl is-active docker": exit status 1 (291.877002ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397448 ssh "sudo systemctl is-active containerd": exit status 1 (296.007303ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
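
The "ssh: Process exited with status 3" stderr above is the expected result: systemctl is-active exits 0 only when the unit is active, and prints "inactive" with exit code 3 otherwise, so the test asserts a non-zero exit for docker and containerd while crio is the active runtime. A sketch of interpreting that exit code in Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active <unit>` exits 0 for an active unit; a stopped
	// unit prints "inactive" and exits 3, which ssh surfaces as status 3.
	out, err := exec.Command("systemctl", "is-active", "docker").CombinedOutput()
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode()
	}
	fmt.Printf("output=%q exit=%d (non-zero here is what the test expects)\n", out, code)
}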

TestFunctional/parallel/License (0.46s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.46s)

TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.5s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-397448 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-397448 image ls --format short --alsologtostderr:
I1017 19:05:33.583992  536195 out.go:360] Setting OutFile to fd 1 ...
I1017 19:05:33.584321  536195 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:05:33.584333  536195 out.go:374] Setting ErrFile to fd 2...
I1017 19:05:33.584340  536195 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:05:33.584655  536195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
I1017 19:05:33.585548  536195 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:05:33.585704  536195 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:05:33.586262  536195 cli_runner.go:164] Run: docker container inspect functional-397448 --format={{.State.Status}}
I1017 19:05:33.606658  536195 ssh_runner.go:195] Run: systemctl --version
I1017 19:05:33.606745  536195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-397448
I1017 19:05:33.628296  536195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/functional-397448/id_rsa Username:docker}
I1017 19:05:33.727039  536195 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-397448 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-397448 image ls --format table --alsologtostderr:
I1017 19:05:34.080736  536483 out.go:360] Setting OutFile to fd 1 ...
I1017 19:05:34.080844  536483 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:05:34.080851  536483 out.go:374] Setting ErrFile to fd 2...
I1017 19:05:34.080855  536483 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:05:34.081075  536483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
I1017 19:05:34.081740  536483 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:05:34.081852  536483 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:05:34.082245  536483 cli_runner.go:164] Run: docker container inspect functional-397448 --format={{.State.Status}}
I1017 19:05:34.101813  536483 ssh_runner.go:195] Run: systemctl --version
I1017 19:05:34.101881  536483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-397448
I1017 19:05:34.121661  536483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/functional-397448/id_rsa Username:docker}
I1017 19:05:34.221118  536483 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-397448 image ls --format json --alsologtostderr:
[{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"},{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-397448 image ls --format json --alsologtostderr:
I1017 19:05:33.849252  536323 out.go:360] Setting OutFile to fd 1 ...
I1017 19:05:33.849506  536323 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:05:33.849515  536323 out.go:374] Setting ErrFile to fd 2...
I1017 19:05:33.849519  536323 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:05:33.849785  536323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
I1017 19:05:33.850434  536323 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:05:33.850524  536323 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:05:33.850995  536323 cli_runner.go:164] Run: docker container inspect functional-397448 --format={{.State.Status}}
I1017 19:05:33.873711  536323 ssh_runner.go:195] Run: systemctl --version
I1017 19:05:33.873794  536323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-397448
I1017 19:05:33.895225  536323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/functional-397448/id_rsa Username:docker}
I1017 19:05:33.994601  536323 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
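
The stdout above is the raw output of `sudo crictl images --output json` relayed by `minikube image ls`; each entry carries an id, repoDigests, repoTags, and a size in bytes encoded as a string. A sketch of decoding entries of that shape (field names inferred from the output above; the sample data is shortened):

package main

import (
	"encoding/json"
	"fmt"
)

// Image mirrors the entry shape printed above; field names are inferred
// from the JSON keys in this report, not taken from crictl's source.
type Image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	// Shortened sample in the same shape as the report's output.
	data := []byte(`[{"id":"da86e6ba6ca1","repoDigests":["registry.k8s.io/pause@sha256:84805d"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]`)
	var imgs []Image
	if err := json.Unmarshal(data, &imgs); err != nil {
		panic(err)
	}
	fmt.Printf("%d image(s); first tag: %s\n", len(imgs), imgs[0].RepoTags[0])
}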

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-397448 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-397448 image ls --format yaml --alsologtostderr:
I1017 19:05:33.620307  536214 out.go:360] Setting OutFile to fd 1 ...
I1017 19:05:33.620665  536214 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:05:33.620703  536214 out.go:374] Setting ErrFile to fd 2...
I1017 19:05:33.620711  536214 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:05:33.620947  536214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
I1017 19:05:33.621650  536214 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:05:33.621787  536214 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:05:33.622214  536214 cli_runner.go:164] Run: docker container inspect functional-397448 --format={{.State.Status}}
I1017 19:05:33.642599  536214 ssh_runner.go:195] Run: systemctl --version
I1017 19:05:33.642667  536214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-397448
I1017 19:05:33.662016  536214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/functional-397448/id_rsa Username:docker}
I1017 19:05:33.759419  536214 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397448 ssh pgrep buildkitd: exit status 1 (288.10873ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 image build -t localhost/my-image:functional-397448 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-397448 image build -t localhost/my-image:functional-397448 testdata/build --alsologtostderr: (1.741497684s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-397448 image build -t localhost/my-image:functional-397448 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 242fbc74019
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-397448
--> 924b0c6d83d
Successfully tagged localhost/my-image:functional-397448
924b0c6d83d0e96375a6d7f21c4be9fbae4dd2b15ce75898bf15f3b5aede6118
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-397448 image build -t localhost/my-image:functional-397448 testdata/build --alsologtostderr:
I1017 19:05:34.106181  536493 out.go:360] Setting OutFile to fd 1 ...
I1017 19:05:34.106489  536493 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:05:34.106502  536493 out.go:374] Setting ErrFile to fd 2...
I1017 19:05:34.106506  536493 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:05:34.106806  536493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
I1017 19:05:34.107501  536493 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:05:34.108156  536493 config.go:182] Loaded profile config "functional-397448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:05:34.108621  536493 cli_runner.go:164] Run: docker container inspect functional-397448 --format={{.State.Status}}
I1017 19:05:34.129432  536493 ssh_runner.go:195] Run: systemctl --version
I1017 19:05:34.129483  536493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-397448
I1017 19:05:34.148428  536493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/functional-397448/id_rsa Username:docker}
I1017 19:05:34.245696  536493 build_images.go:161] Building image from path: /tmp/build.1269172174.tar
I1017 19:05:34.245775  536493 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1017 19:05:34.256593  536493 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1269172174.tar
I1017 19:05:34.260921  536493 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1269172174.tar: stat -c "%s %y" /var/lib/minikube/build/build.1269172174.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1269172174.tar': No such file or directory
I1017 19:05:34.260962  536493 ssh_runner.go:362] scp /tmp/build.1269172174.tar --> /var/lib/minikube/build/build.1269172174.tar (3072 bytes)
I1017 19:05:34.280425  536493 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1269172174
I1017 19:05:34.289253  536493 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1269172174 -xf /var/lib/minikube/build/build.1269172174.tar
I1017 19:05:34.298125  536493 crio.go:315] Building image: /var/lib/minikube/build/build.1269172174
I1017 19:05:34.298216  536493 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-397448 /var/lib/minikube/build/build.1269172174 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1017 19:05:35.769910  536493 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-397448 /var/lib/minikube/build/build.1269172174 --cgroup-manager=cgroupfs: (1.471649823s)
I1017 19:05:35.769997  536493 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1269172174
I1017 19:05:35.779075  536493 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1269172174.tar
I1017 19:05:35.787338  536493 build_images.go:217] Built localhost/my-image:functional-397448 from /tmp/build.1269172174.tar
I1017 19:05:35.787381  536493 build_images.go:133] succeeded building to: functional-397448
I1017 19:05:35.787387  536493 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 image ls
E1017 19:06:53.524531  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:09:09.662175  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:09:37.366785  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:14:09.662671  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.25s)
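For reference, the three STEP lines above correspond to a build context containing roughly the following Containerfile (a sketch reconstructed from the logged steps; the actual contents of testdata/build may differ):

	# Containerfile implied by STEP 1/3 .. 3/3 above (reconstruction, not the repo file)
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

Note that with the crio runtime, minikube image build shells out to podman on the node, as the sudo podman build invocation in the log shows.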

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-397448
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.00s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
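All three UpdateContextCmd variants above exercise the same subcommand, which rewrites the profile's kubeconfig entry. A standalone sketch of the check (the kubectl query is illustrative, not part of the test):

	out/minikube-linux-amd64 -p functional-397448 update-context
	# confirm the kubeconfig now points at the current apiserver endpoint
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'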

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (2.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 image rm kicbase/echo-server:functional-397448 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 image ls
I1017 19:04:59.928867  495725 detect.go:223] nested VM detected
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-397448 image ls: (1.721895066s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.09s)
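A hand-run equivalent of the remove-and-verify sequence (sketch; the grep check is illustrative):

	out/minikube-linux-amd64 -p functional-397448 image rm kicbase/echo-server:functional-397448
	# the tag should no longer appear in the runtime's image list
	out/minikube-linux-amd64 -p functional-397448 image ls | grep echo-server || echo removed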

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "360.531552ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "56.331952ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "340.071888ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "55.758386ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (5.97s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-397448 /tmp/TestFunctionalparallelMountCmdany-port3128788866/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760727915136579135" to /tmp/TestFunctionalparallelMountCmdany-port3128788866/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760727915136579135" to /tmp/TestFunctionalparallelMountCmdany-port3128788866/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760727915136579135" to /tmp/TestFunctionalparallelMountCmdany-port3128788866/001/test-1760727915136579135
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397448 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (285.51454ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1017 19:05:15.422416  495725 retry.go:31] will retry after 703.119324ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 17 19:05 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 17 19:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 17 19:05 test-1760727915136579135
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh cat /mount-9p/test-1760727915136579135
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-397448 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [44c321d8-cfce-4f43-815b-c0f7bf7a9e6f] Pending
helpers_test.go:352: "busybox-mount" [44c321d8-cfce-4f43-815b-c0f7bf7a9e6f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [44c321d8-cfce-4f43-815b-c0f7bf7a9e6f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [44c321d8-cfce-4f43-815b-c0f7bf7a9e6f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003621532s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-397448 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397448 /tmp/TestFunctionalparallelMountCmdany-port3128788866/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.97s)
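The steps the any-port test automates can be reproduced by hand (a sketch; /tmp/mnt is an illustrative host path):

	out/minikube-linux-amd64 mount -p functional-397448 /tmp/mnt:/mount-9p &   # 9p server in the background
	out/minikube-linux-amd64 -p functional-397448 ssh "findmnt -T /mount-9p | grep 9p"
	echo hello > /tmp/mnt/created-by-test        # write on the host...
	out/minikube-linux-amd64 -p functional-397448 ssh cat /mount-9p/created-by-test   # ...read in the guest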

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.84s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-397448 /tmp/TestFunctionalparallelMountCmdspecific-port1013559271/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397448 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (287.353054ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1017 19:05:21.398097  495725 retry.go:31] will retry after 459.067874ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397448 /tmp/TestFunctionalparallelMountCmdspecific-port1013559271/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397448 ssh "sudo umount -f /mount-9p": exit status 1 (304.730993ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-397448 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397448 /tmp/TestFunctionalparallelMountCmdspecific-port1013559271/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.84s)
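The specific-port variant pins the 9p server to a fixed port and then checks that stopping the mount process really unmounts the guest path; the forced umount failing with "not mounted" (status 32) is the expected success signal. By hand (sketch):

	out/minikube-linux-amd64 mount -p functional-397448 /tmp/mnt:/mount-9p --port 46464 &
	kill $!   # stop the mount process; minikube should clean up the guest mount
	# exit status 32 ("not mounted") confirms the cleanup happened
	out/minikube-linux-amd64 -p functional-397448 ssh "sudo umount -f /mount-9p"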

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-397448 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-397448 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-397448 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-397448 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 534065: os: process already finished
helpers_test.go:519: unable to terminate pid 533848: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-397448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1406038563/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-397448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1406038563/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-397448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1406038563/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397448 ssh "findmnt -T" /mount1: exit status 1 (366.416766ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1017 19:05:23.315255  495725 retry.go:31] will retry after 291.05954ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-397448 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1406038563/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1406038563/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1406038563/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-397448 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.28s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-397448 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d3d33b74-01b9-48c9-8b15-93d02b8c78ec] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [d3d33b74-01b9-48c9-8b15-93d02b8c78ec] Running
E1017 19:05:31.603038  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
2025/10/17 19:05:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004170498s
I1017 19:05:33.353400  495725 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.28s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-397448 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.153.78 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
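Taken together, the tunnel subtests verify the full flow: start a tunnel, create a LoadBalancer service, wait for an ingress IP, and reach it from the host. Condensed (sketch; the curl is illustrative, and the IP comes from the jsonpath query):

	out/minikube-linux-amd64 -p functional-397448 tunnel &
	kubectl --context functional-397448 apply -f testdata/testsvc.yaml
	# once the tunnel is routing, the service reports an ingress IP (its ClusterIP)
	kubectl --context functional-397448 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://10.96.153.78/   # reachable only while the tunnel process runs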

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-397448 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.71s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-397448 service list: (1.705988407s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-397448 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-397448 service list -o json: (1.705578368s)
functional_test.go:1504: Took "1.705691365s" to run "out/minikube-linux-amd64 -p functional-397448 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-397448
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-397448
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-397448
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (121.92s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-235879 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m1.173425506s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (121.92s)
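The cluster under test is a multi-control-plane (HA) cluster; the logged start command, annotated (sketch):

	# --ha provisions extra control-plane nodes; --wait true blocks until components are healthy
	out/minikube-linux-amd64 -p ha-235879 start --ha --memory 3072 --wait true \
	    --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p ha-235879 status   # prints one status block per node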

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.3s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-235879 kubectl -- rollout status deployment/busybox: (2.307736572s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- exec busybox-7b57f96db7-cs2qn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- exec busybox-7b57f96db7-l4b9w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- exec busybox-7b57f96db7-q5xxw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- exec busybox-7b57f96db7-cs2qn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- exec busybox-7b57f96db7-l4b9w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- exec busybox-7b57f96db7-q5xxw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- exec busybox-7b57f96db7-cs2qn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- exec busybox-7b57f96db7-l4b9w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- exec busybox-7b57f96db7-q5xxw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.30s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.03s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- exec busybox-7b57f96db7-cs2qn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- exec busybox-7b57f96db7-cs2qn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- exec busybox-7b57f96db7-l4b9w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- exec busybox-7b57f96db7-l4b9w -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- exec busybox-7b57f96db7-q5xxw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 kubectl -- exec busybox-7b57f96db7-q5xxw -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.03s)
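The pipeline in the exec commands extracts the resolved address of host.minikube.internal from busybox nslookup output (fifth line, third space-separated field), and the follow-up ping confirms the host gateway is reachable from the pod network. Standalone (sketch; the pod name is taken from the log above):

	kubectl --context ha-235879 exec busybox-7b57f96db7-cs2qn -- \
	    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	# 192.168.49.1 is the docker network gateway for this cluster
	kubectl --context ha-235879 exec busybox-7b57f96db7-cs2qn -- sh -c "ping -c 1 192.168.49.1"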

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.65s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-235879 node add --alsologtostderr -v 5: (23.736651853s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.65s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-235879 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.76s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp testdata/cp-test.txt ha-235879:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp ha-235879:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3820146076/001/cp-test_ha-235879.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp ha-235879:/home/docker/cp-test.txt ha-235879-m02:/home/docker/cp-test_ha-235879_ha-235879-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m02 "sudo cat /home/docker/cp-test_ha-235879_ha-235879-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp ha-235879:/home/docker/cp-test.txt ha-235879-m03:/home/docker/cp-test_ha-235879_ha-235879-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m03 "sudo cat /home/docker/cp-test_ha-235879_ha-235879-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp ha-235879:/home/docker/cp-test.txt ha-235879-m04:/home/docker/cp-test_ha-235879_ha-235879-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m04 "sudo cat /home/docker/cp-test_ha-235879_ha-235879-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp testdata/cp-test.txt ha-235879-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp ha-235879-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3820146076/001/cp-test_ha-235879-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp ha-235879-m02:/home/docker/cp-test.txt ha-235879:/home/docker/cp-test_ha-235879-m02_ha-235879.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879 "sudo cat /home/docker/cp-test_ha-235879-m02_ha-235879.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp ha-235879-m02:/home/docker/cp-test.txt ha-235879-m03:/home/docker/cp-test_ha-235879-m02_ha-235879-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m03 "sudo cat /home/docker/cp-test_ha-235879-m02_ha-235879-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp ha-235879-m02:/home/docker/cp-test.txt ha-235879-m04:/home/docker/cp-test_ha-235879-m02_ha-235879-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m04 "sudo cat /home/docker/cp-test_ha-235879-m02_ha-235879-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp testdata/cp-test.txt ha-235879-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp ha-235879-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3820146076/001/cp-test_ha-235879-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp ha-235879-m03:/home/docker/cp-test.txt ha-235879:/home/docker/cp-test_ha-235879-m03_ha-235879.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879 "sudo cat /home/docker/cp-test_ha-235879-m03_ha-235879.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp ha-235879-m03:/home/docker/cp-test.txt ha-235879-m02:/home/docker/cp-test_ha-235879-m03_ha-235879-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m02 "sudo cat /home/docker/cp-test_ha-235879-m03_ha-235879-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp ha-235879-m03:/home/docker/cp-test.txt ha-235879-m04:/home/docker/cp-test_ha-235879-m03_ha-235879-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m04 "sudo cat /home/docker/cp-test_ha-235879-m03_ha-235879-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp testdata/cp-test.txt ha-235879-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp ha-235879-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3820146076/001/cp-test_ha-235879-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp ha-235879-m04:/home/docker/cp-test.txt ha-235879:/home/docker/cp-test_ha-235879-m04_ha-235879.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879 "sudo cat /home/docker/cp-test_ha-235879-m04_ha-235879.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp ha-235879-m04:/home/docker/cp-test.txt ha-235879-m02:/home/docker/cp-test_ha-235879-m04_ha-235879-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m02 "sudo cat /home/docker/cp-test_ha-235879-m04_ha-235879-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 cp ha-235879-m04:/home/docker/cp-test.txt ha-235879-m03:/home/docker/cp-test_ha-235879-m04_ha-235879-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m03 "sudo cat /home/docker/cp-test_ha-235879-m04_ha-235879-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.76s)
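CopyFile exercises the full copy matrix (host to node, node to host, and node to node for every pair), verifying each transfer with sudo cat. One leg of that matrix by hand (sketch):

	out/minikube-linux-amd64 -p ha-235879 cp testdata/cp-test.txt ha-235879:/home/docker/cp-test.txt   # host -> node
	out/minikube-linux-amd64 -p ha-235879 cp ha-235879:/home/docker/cp-test.txt \
	    ha-235879-m02:/home/docker/cp-test.txt                                                         # node -> node
	out/minikube-linux-amd64 -p ha-235879 ssh -n ha-235879-m02 "sudo cat /home/docker/cp-test.txt"     # verify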

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.75s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-235879 node stop m02 --alsologtostderr -v 5: (13.029142394s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-235879 status --alsologtostderr -v 5: exit status 7 (724.646564ms)

                                                
                                                
-- stdout --
	ha-235879
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235879-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-235879-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235879-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:18:13.698519  560946 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:18:13.698808  560946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:18:13.698817  560946 out.go:374] Setting ErrFile to fd 2...
	I1017 19:18:13.698821  560946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:18:13.699113  560946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:18:13.699306  560946 out.go:368] Setting JSON to false
	I1017 19:18:13.699338  560946 mustload.go:65] Loading cluster: ha-235879
	I1017 19:18:13.699483  560946 notify.go:220] Checking for updates...
	I1017 19:18:13.699802  560946 config.go:182] Loaded profile config "ha-235879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:18:13.699821  560946 status.go:174] checking status of ha-235879 ...
	I1017 19:18:13.700244  560946 cli_runner.go:164] Run: docker container inspect ha-235879 --format={{.State.Status}}
	I1017 19:18:13.720335  560946 status.go:371] ha-235879 host status = "Running" (err=<nil>)
	I1017 19:18:13.720366  560946 host.go:66] Checking if "ha-235879" exists ...
	I1017 19:18:13.720640  560946 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-235879
	I1017 19:18:13.739537  560946 host.go:66] Checking if "ha-235879" exists ...
	I1017 19:18:13.739853  560946 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:18:13.739929  560946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-235879
	I1017 19:18:13.759450  560946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/ha-235879/id_rsa Username:docker}
	I1017 19:18:13.855851  560946 ssh_runner.go:195] Run: systemctl --version
	I1017 19:18:13.863177  560946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:18:13.876849  560946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:18:13.940661  560946 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-17 19:18:13.929671608 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:18:13.941331  560946 kubeconfig.go:125] found "ha-235879" server: "https://192.168.49.254:8443"
	I1017 19:18:13.941370  560946 api_server.go:166] Checking apiserver status ...
	I1017 19:18:13.941417  560946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:18:13.954380  560946 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1237/cgroup
	W1017 19:18:13.963953  560946 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1237/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:18:13.964035  560946 ssh_runner.go:195] Run: ls
	I1017 19:18:13.968195  560946 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1017 19:18:13.973210  560946 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1017 19:18:13.973241  560946 status.go:463] ha-235879 apiserver status = Running (err=<nil>)
	I1017 19:18:13.973256  560946 status.go:176] ha-235879 status: &{Name:ha-235879 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:18:13.973278  560946 status.go:174] checking status of ha-235879-m02 ...
	I1017 19:18:13.973656  560946 cli_runner.go:164] Run: docker container inspect ha-235879-m02 --format={{.State.Status}}
	I1017 19:18:13.994064  560946 status.go:371] ha-235879-m02 host status = "Stopped" (err=<nil>)
	I1017 19:18:13.994088  560946 status.go:384] host is not running, skipping remaining checks
	I1017 19:18:13.994095  560946 status.go:176] ha-235879-m02 status: &{Name:ha-235879-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:18:13.994120  560946 status.go:174] checking status of ha-235879-m03 ...
	I1017 19:18:13.994417  560946 cli_runner.go:164] Run: docker container inspect ha-235879-m03 --format={{.State.Status}}
	I1017 19:18:14.013267  560946 status.go:371] ha-235879-m03 host status = "Running" (err=<nil>)
	I1017 19:18:14.013298  560946 host.go:66] Checking if "ha-235879-m03" exists ...
	I1017 19:18:14.013591  560946 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-235879-m03
	I1017 19:18:14.034601  560946 host.go:66] Checking if "ha-235879-m03" exists ...
	I1017 19:18:14.034975  560946 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:18:14.035031  560946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-235879-m03
	I1017 19:18:14.054717  560946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/ha-235879-m03/id_rsa Username:docker}
	I1017 19:18:14.152593  560946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:18:14.166315  560946 kubeconfig.go:125] found "ha-235879" server: "https://192.168.49.254:8443"
	I1017 19:18:14.166343  560946 api_server.go:166] Checking apiserver status ...
	I1017 19:18:14.166383  560946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:18:14.178051  560946 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1171/cgroup
	W1017 19:18:14.187195  560946 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1171/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:18:14.187248  560946 ssh_runner.go:195] Run: ls
	I1017 19:18:14.191275  560946 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1017 19:18:14.195700  560946 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1017 19:18:14.195736  560946 status.go:463] ha-235879-m03 apiserver status = Running (err=<nil>)
	I1017 19:18:14.195748  560946 status.go:176] ha-235879-m03 status: &{Name:ha-235879-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:18:14.195768  560946 status.go:174] checking status of ha-235879-m04 ...
	I1017 19:18:14.196020  560946 cli_runner.go:164] Run: docker container inspect ha-235879-m04 --format={{.State.Status}}
	I1017 19:18:14.214913  560946 status.go:371] ha-235879-m04 host status = "Running" (err=<nil>)
	I1017 19:18:14.214944  560946 host.go:66] Checking if "ha-235879-m04" exists ...
	I1017 19:18:14.215277  560946 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-235879-m04
	I1017 19:18:14.234206  560946 host.go:66] Checking if "ha-235879-m04" exists ...
	I1017 19:18:14.234565  560946 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:18:14.234619  560946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-235879-m04
	I1017 19:18:14.253329  560946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/ha-235879-m04/id_rsa Username:docker}
	I1017 19:18:14.349754  560946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:18:14.366046  560946 status.go:176] ha-235879-m04 status: &{Name:ha-235879-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.75s)
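Note the exit status 7 from minikube status: the command encodes cluster health in its exit code, so a stopped node makes the invocation non-zero even though that is exactly what the test expects here. In a script (sketch):

	out/minikube-linux-amd64 -p ha-235879 node stop m02
	# status exits non-zero while any node is down; capture instead of aborting
	out/minikube-linux-amd64 -p ha-235879 status || echo "cluster degraded (exit $?)"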

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (14.62s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-235879 node start m02 --alsologtostderr -v 5: (13.629923006s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.62s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.04s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 stop --alsologtostderr -v 5
E1017 19:19:09.662832  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-235879 stop --alsologtostderr -v 5: (49.23032579s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 start --wait true --alsologtostderr -v 5
E1017 19:19:53.360914  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:19:53.367342  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:19:53.378835  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:19:53.400295  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:19:53.441747  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:19:53.523230  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:19:53.684804  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:19:54.006524  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:19:54.648917  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:19:55.930990  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:19:58.492587  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:20:03.614925  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:20:13.857127  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-235879 start --wait true --alsologtostderr -v 5: (57.697313363s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.04s)
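
The pass criterion in this block is that "minikube node list" reports the same nodes before the stop and after the "start --wait true" restart. A minimal Go sketch of that before/after comparison, assuming the tree-local binary out/minikube-linux-amd64 and the profile name from this run:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // nodeList captures the current node list for the ha-235879 profile.
    func nodeList() []byte {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-235879", "node", "list").Output()
        if err != nil {
            panic(err)
        }
        return out
    }

    func run(args ...string) {
        if err := exec.Command("out/minikube-linux-amd64", args...).Run(); err != nil {
            panic(err)
        }
    }

    func main() {
        before := nodeList()
        run("-p", "ha-235879", "stop")
        run("-p", "ha-235879", "start", "--wait", "true")
        if after := nodeList(); !bytes.Equal(before, after) {
            panic(fmt.Sprintf("node list changed across restart:\n%s\nvs\n%s", before, after))
        }
    }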

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.67s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-235879 node delete m03 --alsologtostderr -v 5: (9.816709689s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.67s)
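
The go-template in the final kubectl call above prints the status of each node's Ready condition, one line per node, so a healthy cluster after the delete prints one " True" line per remaining node. A sketch of the same probe outside the test harness; the single quotes in the logged command are shell quoting, not part of the template:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // For every node, emit the status of its "Ready" condition followed by a
        // newline; {{"\n"}} is a template string literal, not shell escaping.
        tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
        out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out)) // expect " True" once per node
    }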

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

TestMultiControlPlane/serial/StopCluster (41.81s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 stop --alsologtostderr -v 5
E1017 19:20:32.728872  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:20:34.339056  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-235879 stop --alsologtostderr -v 5: (41.695260958s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-235879 status --alsologtostderr -v 5: exit status 7 (115.986721ms)

-- stdout --
	ha-235879
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-235879-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-235879-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1017 19:21:10.851290  574950 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:21:10.851576  574950 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:21:10.851588  574950 out.go:374] Setting ErrFile to fd 2...
	I1017 19:21:10.851594  574950 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:21:10.851940  574950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:21:10.852151  574950 out.go:368] Setting JSON to false
	I1017 19:21:10.852256  574950 mustload.go:65] Loading cluster: ha-235879
	I1017 19:21:10.852366  574950 notify.go:220] Checking for updates...
	I1017 19:21:10.852770  574950 config.go:182] Loaded profile config "ha-235879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:21:10.852790  574950 status.go:174] checking status of ha-235879 ...
	I1017 19:21:10.853295  574950 cli_runner.go:164] Run: docker container inspect ha-235879 --format={{.State.Status}}
	I1017 19:21:10.873417  574950 status.go:371] ha-235879 host status = "Stopped" (err=<nil>)
	I1017 19:21:10.873444  574950 status.go:384] host is not running, skipping remaining checks
	I1017 19:21:10.873456  574950 status.go:176] ha-235879 status: &{Name:ha-235879 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:21:10.873481  574950 status.go:174] checking status of ha-235879-m02 ...
	I1017 19:21:10.873901  574950 cli_runner.go:164] Run: docker container inspect ha-235879-m02 --format={{.State.Status}}
	I1017 19:21:10.893375  574950 status.go:371] ha-235879-m02 host status = "Stopped" (err=<nil>)
	I1017 19:21:10.893400  574950 status.go:384] host is not running, skipping remaining checks
	I1017 19:21:10.893406  574950 status.go:176] ha-235879-m02 status: &{Name:ha-235879-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:21:10.893432  574950 status.go:174] checking status of ha-235879-m04 ...
	I1017 19:21:10.893796  574950 cli_runner.go:164] Run: docker container inspect ha-235879-m04 --format={{.State.Status}}
	I1017 19:21:10.912105  574950 status.go:371] ha-235879-m04 host status = "Stopped" (err=<nil>)
	I1017 19:21:10.912155  574950 status.go:384] host is not running, skipping remaining checks
	I1017 19:21:10.912164  574950 status.go:176] ha-235879-m04 status: &{Name:ha-235879-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.81s)
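
The exit status 7 from the status command above is expected rather than a failure: per minikube's own help text, "minikube status" encodes the host, control plane, and Kubernetes state as bits of the exit code, least significant first, with a bit set when that component is not running. The constant names below follow recent minikube sources (cmd/minikube/cmd/status.go) and should be treated as an assumption, not a stable API. A decoding sketch:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // Assumed bit layout of the minikube status exit code.
    const (
        minikubeNotRunning = 1 << 0 // host stopped
        clusterNotRunning  = 1 << 1 // control plane stopped
        k8sNotRunning      = 1 << 2 // apiserver not responding
    )

    func main() {
        err := exec.Command("out/minikube-linux-amd64", "-p", "ha-235879", "status").Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            code := ee.ExitCode()
            // exit status 7 sets all three bits: everything down, as expected
            // immediately after "minikube stop".
            fmt.Printf("host stopped: %v, cluster stopped: %v, k8s stopped: %v\n",
                code&minikubeNotRunning != 0,
                code&clusterNotRunning != 0,
                code&k8sNotRunning != 0)
        }
    }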

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (56.84s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1017 19:21:15.300759  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-235879 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (56.007059584s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.84s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

TestMultiControlPlane/serial/AddSecondaryNode (50.62s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 node add --control-plane --alsologtostderr -v 5
E1017 19:22:37.222653  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-235879 node add --control-plane --alsologtostderr -v 5: (49.727852583s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-235879 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (50.62s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

TestJSONOutput/start/Command (40.09s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-581322 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-581322 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (40.090392822s)
--- PASS: TestJSONOutput/start/Command (40.09s)
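
With --output=json, minikube emits one CloudEvents-style JSON object per line on stdout; the TestErrorJSONOutput block further down shows the raw shape. A minimal consumer sketch, assuming only that each line is a self-contained object with "type" and "data" fields:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "start",
            "-p", "json-output-581322", "--output=json", "--user=testUser",
            "--memory=3072", "--wait=true", "--driver=docker", "--container-runtime=crio")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        sc := bufio.NewScanner(stdout)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // individual events can be long
        for sc.Scan() {
            var ev struct {
                Type string            `json:"type"`
                Data map[string]string `json:"data"`
            }
            if json.Unmarshal(sc.Bytes(), &ev) != nil {
                continue // ignore any non-JSON noise
            }
            fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
        }
        _ = cmd.Wait()
    }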

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.07s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-581322 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-581322 --output=json --user=testUser: (6.065868604s)
--- PASS: TestJSONOutput/stop/Command (6.07s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-543464 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-543464 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (73.925282ms)

-- stdout --
	{"specversion":"1.0","id":"4fc06e08-ce15-495d-9f90-ccb32a9d5d56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-543464] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc1e1874-ff49-4892-b46f-9450a130fc40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21753"}}
	{"specversion":"1.0","id":"4ec1fbe2-4d84-4922-b41d-19fac05a584d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5775508c-5590-4a07-9717-cfa34ab4f451","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig"}}
	{"specversion":"1.0","id":"d23f7915-7c6e-4a5f-a7df-e3ca134a806b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube"}}
	{"specversion":"1.0","id":"7a6d921e-a72a-476b-bd3e-7a99a8a331b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"980cd660-991a-45dc-a768-4f856243a9a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"735b2b3f-5cb2-4c1f-a6dd-f5eac768cd2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-543464" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-543464
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (27.5s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-349536 --network=
E1017 19:24:09.667819  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-349536 --network=: (25.265367655s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-349536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-349536
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-349536: (2.214775404s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.50s)

TestKicCustomNetwork/use_default_bridge_network (23.98s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-929173 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-929173 --network=bridge: (21.935101748s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-929173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-929173
E1017 19:24:53.360870  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-929173: (2.020425891s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.98s)

TestKicExistingNetwork (27.77s)

=== RUN   TestKicExistingNetwork
I1017 19:24:54.615525  495725 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1017 19:24:54.634013  495725 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1017 19:24:54.634078  495725 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1017 19:24:54.634095  495725 cli_runner.go:164] Run: docker network inspect existing-network
W1017 19:24:54.651969  495725 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1017 19:24:54.651998  495725 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1017 19:24:54.652012  495725 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1017 19:24:54.652147  495725 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1017 19:24:54.669695  495725 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-730d915fa684 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e2:02:cd:78:1c:78} reservation:<nil>}
I1017 19:24:54.670159  495725 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a70}
I1017 19:24:54.670186  495725 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1017 19:24:54.670243  495725 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1017 19:24:54.727239  495725 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-260111 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-260111 --network=existing-network: (25.579972673s)
helpers_test.go:175: Cleaning up "existing-network-260111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-260111
E1017 19:25:21.064943  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-260111: (2.041394271s)
I1017 19:25:22.366395  495725 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (27.77s)
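
The setup here creates the bridge network by hand before pointing minikube at it with --network=existing-network; the flags below are copied from the network_create log lines above. The same creation step as a sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirror the network the test pre-creates, including minikube's labels
        // so cleanup tooling can find it.
        args := []string{
            "network", "create",
            "--driver=bridge",
            "--subnet=192.168.58.0/24",
            "--gateway=192.168.58.1",
            "-o", "--ip-masq",
            "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=existing-network",
            "existing-network",
        }
        if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
            panic(fmt.Sprintf("%v: %s", err, out))
        }
        fmt.Println("created existing-network; start minikube with --network=existing-network")
    }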

                                                
                                    
TestKicCustomSubnet (23.97s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-081422 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-081422 --subnet=192.168.60.0/24: (21.768207658s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-081422 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-081422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-081422
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-081422: (2.181223438s)
--- PASS: TestKicCustomSubnet (23.97s)
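
The verification step asks Docker for the first IPAM subnet of the profile's network and compares it with what --subnet requested. The same check as a standalone sketch, with the profile name and subnet from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        const want = "192.168.60.0/24"
        out, err := exec.Command("docker", "network", "inspect", "custom-subnet-081422",
            "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
        if err != nil {
            panic(err)
        }
        if got := strings.TrimSpace(string(out)); got != want {
            panic(fmt.Sprintf("subnet mismatch: got %s, want %s", got, want))
        }
        fmt.Println("subnet matches:", want)
    }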

                                                
                                    
TestKicStaticIP (25.49s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-806576 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-806576 --static-ip=192.168.200.200: (23.190793042s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-806576 ip
helpers_test.go:175: Cleaning up "static-ip-806576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-806576
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-806576: (2.16136046s)
--- PASS: TestKicStaticIP (25.49s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (50.03s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-215550 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-215550 --driver=docker  --container-runtime=crio: (20.809789282s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-217576 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-217576 --driver=docker  --container-runtime=crio: (23.156857772s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-215550
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-217576
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-217576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-217576
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-217576: (2.415011883s)
helpers_test.go:175: Cleaning up "first-215550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-215550
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-215550: (2.414844497s)
--- PASS: TestMinikubeProfile (50.03s)
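
"profile list -ojson" is used here to confirm both profiles stay registered while switching between them. A decoding sketch; the JSON shape (valid/invalid buckets of profile objects with a Name field) matches current minikube output but is an assumption, not a documented contract:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
        if err != nil {
            panic(err)
        }
        var profiles struct {
            Valid   []struct{ Name string } `json:"valid"`
            Invalid []struct{ Name string } `json:"invalid"`
        }
        if err := json.Unmarshal(out, &profiles); err != nil {
            panic(err)
        }
        for _, p := range profiles.Valid {
            fmt.Println("valid profile:", p.Name) // expect first-215550 and second-217576
        }
    }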

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.16s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-091247 --memory=3072 --mount-string /tmp/TestMountStartserial1925028885/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-091247 --memory=3072 --mount-string /tmp/TestMountStartserial1925028885/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.158193694s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.16s)
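
The start invocation above carries the full set of 9p mount flags. A sketch spelling them out, with the host path swapped for a placeholder; the inline notes summarize the flag meanings as documented by "minikube start --help" and are not authoritative:

    package main

    import "os/exec"

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "start",
            "-p", "mount-start-1-091247",
            "--memory=3072",
            "--mount-string", "/tmp/host-share:/minikube-host", // hostdir:guestdir
            "--mount-gid", "0", // gid owning the mount in the guest
            "--mount-uid", "0", // uid owning the mount in the guest
            "--mount-msize", "6543", // 9p maximum message size, bytes
            "--mount-port", "46464", // port the 9p server listens on
            "--no-kubernetes", // bring up the node only, no cluster
            "--driver=docker", "--container-runtime=crio")
        if err := cmd.Run(); err != nil {
            panic(err)
        }
        // VerifyMountFirst then checks the share with:
        //   out/minikube-linux-amd64 -p mount-start-1-091247 ssh -- ls /minikube-host
    }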

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-091247 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (5.74s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-110960 --memory=3072 --mount-string /tmp/TestMountStartserial1925028885/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-110960 --memory=3072 --mount-string /tmp/TestMountStartserial1925028885/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.738879527s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.74s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-110960 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-091247 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-091247 --alsologtostderr -v=5: (1.726249885s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-110960 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-110960
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-110960: (1.257290877s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (7.1s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-110960
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-110960: (6.101746276s)
--- PASS: TestMountStart/serial/RestartStopped (7.10s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-110960 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (68.97s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-519393 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-519393 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m8.469301686s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (68.97s)

TestMultiNode/serial/DeployApp2Nodes (3.38s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519393 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519393 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-519393 -- rollout status deployment/busybox: (1.961408947s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519393 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519393 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519393 -- exec busybox-7b57f96db7-5fczx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519393 -- exec busybox-7b57f96db7-vvpk8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519393 -- exec busybox-7b57f96db7-5fczx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519393 -- exec busybox-7b57f96db7-vvpk8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519393 -- exec busybox-7b57f96db7-5fczx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519393 -- exec busybox-7b57f96db7-vvpk8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.38s)
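
The deployment puts one busybox pod on each node, and the test resolves the same three names from both pods to confirm cluster DNS answers regardless of which node the query originates on. A compact sketch of that loop; the pod names are the ones from this run and change every run:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        pods := []string{"busybox-7b57f96db7-5fczx", "busybox-7b57f96db7-vvpk8"}
        names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
        for _, pod := range pods {
            for _, name := range names {
                out, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", name).CombinedOutput()
                if err != nil {
                    panic(fmt.Sprintf("lookup of %s from %s failed: %v\n%s", name, pod, err, out))
                }
            }
        }
        fmt.Println("all lookups succeeded from both pods")
    }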

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.71s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519393 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519393 -- exec busybox-7b57f96db7-5fczx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519393 -- exec busybox-7b57f96db7-5fczx -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519393 -- exec busybox-7b57f96db7-vvpk8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519393 -- exec busybox-7b57f96db7-vvpk8 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)
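
The sh pipeline in these exec calls digs the host gateway address out of busybox's nslookup output: the resolved address sits on line 5 of that output format (hence awk 'NR==5'), cut takes the third space-separated field, and the test then pings that address once from inside the pod. The same probe for a single pod, as a sketch:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        pod := "busybox-7b57f96db7-5fczx" // pod name from this run
        script := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
        ipOut, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", script).Output()
        if err != nil {
            panic(err)
        }
        ip := strings.TrimSpace(string(ipOut))
        if out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", "ping -c 1 "+ip).CombinedOutput(); err != nil {
            panic(fmt.Sprintf("ping %s failed: %v\n%s", ip, err, out))
        }
        fmt.Println("host reachable at", ip)
    }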

                                                
                                    
TestMultiNode/serial/AddNode (24.64s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-519393 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-519393 -v=5 --alsologtostderr: (24.003554788s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.64s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-519393 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (9.79s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 cp testdata/cp-test.txt multinode-519393:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 cp multinode-519393:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3494145719/001/cp-test_multinode-519393.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 cp multinode-519393:/home/docker/cp-test.txt multinode-519393-m02:/home/docker/cp-test_multinode-519393_multinode-519393-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393-m02 "sudo cat /home/docker/cp-test_multinode-519393_multinode-519393-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 cp multinode-519393:/home/docker/cp-test.txt multinode-519393-m03:/home/docker/cp-test_multinode-519393_multinode-519393-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393-m03 "sudo cat /home/docker/cp-test_multinode-519393_multinode-519393-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 cp testdata/cp-test.txt multinode-519393-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 cp multinode-519393-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3494145719/001/cp-test_multinode-519393-m02.txt
E1017 19:29:09.662432  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 cp multinode-519393-m02:/home/docker/cp-test.txt multinode-519393:/home/docker/cp-test_multinode-519393-m02_multinode-519393.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393 "sudo cat /home/docker/cp-test_multinode-519393-m02_multinode-519393.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 cp multinode-519393-m02:/home/docker/cp-test.txt multinode-519393-m03:/home/docker/cp-test_multinode-519393-m02_multinode-519393-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393-m03 "sudo cat /home/docker/cp-test_multinode-519393-m02_multinode-519393-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 cp testdata/cp-test.txt multinode-519393-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 cp multinode-519393-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3494145719/001/cp-test_multinode-519393-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 cp multinode-519393-m03:/home/docker/cp-test.txt multinode-519393:/home/docker/cp-test_multinode-519393-m03_multinode-519393.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393 "sudo cat /home/docker/cp-test_multinode-519393-m03_multinode-519393.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 cp multinode-519393-m03:/home/docker/cp-test.txt multinode-519393-m02:/home/docker/cp-test_multinode-519393-m03_multinode-519393-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 ssh -n multinode-519393-m02 "sudo cat /home/docker/cp-test_multinode-519393-m03_multinode-519393-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.79s)
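
The copy matrix above exercises every direction "minikube cp" supports: host to node, node to host, and node to node, each verified with "sudo cat" over ssh. One leg of that matrix as a sketch, using the profile and paths from this run:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // mk runs a minikube subcommand against the multinode-519393 profile.
    func mk(args ...string) string {
        full := append([]string{"-p", "multinode-519393"}, args...)
        out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v: %s", err, out))
        }
        return string(out)
    }

    func main() {
        // Host -> control-plane node, then read it back over ssh.
        mk("cp", "testdata/cp-test.txt", "multinode-519393:/home/docker/cp-test.txt")
        got := mk("ssh", "-n", "multinode-519393", "sudo cat /home/docker/cp-test.txt")
        fmt.Printf("round-tripped %d bytes\n", len(got))
        // Node -> node: both source and destination may be <node>:<path>.
        mk("cp", "multinode-519393:/home/docker/cp-test.txt",
            "multinode-519393-m02:/home/docker/cp-test_copy.txt")
    }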

                                                
                                    
TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-519393 node stop m03: (1.270975549s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-519393 status: exit status 7 (499.764244ms)

-- stdout --
	multinode-519393
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-519393-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-519393-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-519393 status --alsologtostderr: exit status 7 (500.645906ms)

-- stdout --
	multinode-519393
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-519393-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-519393-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1017 19:29:16.883462  634538 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:29:16.883601  634538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:29:16.883614  634538 out.go:374] Setting ErrFile to fd 2...
	I1017 19:29:16.883620  634538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:29:16.883851  634538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:29:16.884052  634538 out.go:368] Setting JSON to false
	I1017 19:29:16.884084  634538 mustload.go:65] Loading cluster: multinode-519393
	I1017 19:29:16.884210  634538 notify.go:220] Checking for updates...
	I1017 19:29:16.884622  634538 config.go:182] Loaded profile config "multinode-519393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:29:16.884646  634538 status.go:174] checking status of multinode-519393 ...
	I1017 19:29:16.885236  634538 cli_runner.go:164] Run: docker container inspect multinode-519393 --format={{.State.Status}}
	I1017 19:29:16.903045  634538 status.go:371] multinode-519393 host status = "Running" (err=<nil>)
	I1017 19:29:16.903087  634538 host.go:66] Checking if "multinode-519393" exists ...
	I1017 19:29:16.903393  634538 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-519393
	I1017 19:29:16.922434  634538 host.go:66] Checking if "multinode-519393" exists ...
	I1017 19:29:16.922735  634538 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:29:16.922793  634538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-519393
	I1017 19:29:16.941150  634538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/multinode-519393/id_rsa Username:docker}
	I1017 19:29:17.036584  634538 ssh_runner.go:195] Run: systemctl --version
	I1017 19:29:17.043162  634538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:29:17.055920  634538 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:29:17.119180  634538 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-17 19:29:17.109301945 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:29:17.119851  634538 kubeconfig.go:125] found "multinode-519393" server: "https://192.168.67.2:8443"
	I1017 19:29:17.119888  634538 api_server.go:166] Checking apiserver status ...
	I1017 19:29:17.119941  634538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:17.131804  634538 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1263/cgroup
	W1017 19:29:17.140455  634538 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1263/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:29:17.140502  634538 ssh_runner.go:195] Run: ls
	I1017 19:29:17.144416  634538 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1017 19:29:17.149377  634538 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1017 19:29:17.149401  634538 status.go:463] multinode-519393 apiserver status = Running (err=<nil>)
	I1017 19:29:17.149411  634538 status.go:176] multinode-519393 status: &{Name:multinode-519393 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:29:17.149429  634538 status.go:174] checking status of multinode-519393-m02 ...
	I1017 19:29:17.149677  634538 cli_runner.go:164] Run: docker container inspect multinode-519393-m02 --format={{.State.Status}}
	I1017 19:29:17.167677  634538 status.go:371] multinode-519393-m02 host status = "Running" (err=<nil>)
	I1017 19:29:17.167729  634538 host.go:66] Checking if "multinode-519393-m02" exists ...
	I1017 19:29:17.168027  634538 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-519393-m02
	I1017 19:29:17.186445  634538 host.go:66] Checking if "multinode-519393-m02" exists ...
	I1017 19:29:17.186757  634538 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:29:17.186805  634538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-519393-m02
	I1017 19:29:17.204654  634538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21753-492109/.minikube/machines/multinode-519393-m02/id_rsa Username:docker}
	I1017 19:29:17.299552  634538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:29:17.313047  634538 status.go:176] multinode-519393-m02 status: &{Name:multinode-519393-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:29:17.313102  634538 status.go:174] checking status of multinode-519393-m03 ...
	I1017 19:29:17.313377  634538 cli_runner.go:164] Run: docker container inspect multinode-519393-m03 --format={{.State.Status}}
	I1017 19:29:17.331285  634538 status.go:371] multinode-519393-m03 host status = "Stopped" (err=<nil>)
	I1017 19:29:17.331311  634538 status.go:384] host is not running, skipping remaining checks
	I1017 19:29:17.331320  634538 status.go:176] multinode-519393-m03 status: &{Name:multinode-519393-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
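
The stderr trace above is minikube's full status pipeline for one node: ask Docker for the container state, ssh in to check kubelet, then probe the apiserver's /healthz endpoint. Below is a minimal Go sketch of the two bookend probes, assuming the docker CLI is on PATH; the profile name and endpoint are the ones from this run, and the TLS skip-verify exists only to keep the sketch self-contained (minikube itself verifies against the cluster CA).

// Sketch of the host and apiserver probes seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Host status: `docker container inspect <profile> --format={{.State.Status}}`.
	out, err := exec.Command("docker", "container", "inspect",
		"multinode-519393", "--format", "{{.State.Status}}").Output()
	if err != nil {
		fmt.Println("host: Nonexistent")
		return
	}
	fmt.Println("host:", strings.TrimSpace(string(out)))

	// Apiserver health: a 200 from /healthz counts as Running.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver: Stopped")
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver healthz:", resp.StatusCode)
}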

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-519393 node start m03 -v=5 --alsologtostderr: (6.652548769s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.35s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (78.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-519393
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-519393
E1017 19:29:53.363213  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-519393: (31.430807636s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-519393 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-519393 --wait=true -v=5 --alsologtostderr: (46.764543651s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-519393
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.31s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-519393 node delete m03: (4.707228242s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.36s)
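
The readiness check at multinode_test.go:444 relies on a kubectl go-template that prints each node's Ready condition status. A short Go sketch of the same check, assuming kubectl is on PATH with its context pointed at the cluster:

// Count nodes whose Ready condition is True, using the same go-template
// as the test above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		panic(err)
	}
	ready := 0
	for _, line := range strings.Split(string(out), "\n") {
		if strings.TrimSpace(line) == "True" {
			ready++
		}
	}
	fmt.Printf("%d node(s) Ready\n", ready)
}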

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-519393 stop: (28.422562076s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-519393 status: exit status 7 (93.627905ms)

                                                
                                                
-- stdout --
	multinode-519393
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-519393-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-519393 status --alsologtostderr: exit status 7 (95.501591ms)

                                                
                                                
-- stdout --
	multinode-519393
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-519393-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:31:16.920803  644297 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:31:16.921095  644297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:31:16.921104  644297 out.go:374] Setting ErrFile to fd 2...
	I1017 19:31:16.921108  644297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:31:16.921296  644297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:31:16.921477  644297 out.go:368] Setting JSON to false
	I1017 19:31:16.921509  644297 mustload.go:65] Loading cluster: multinode-519393
	I1017 19:31:16.921593  644297 notify.go:220] Checking for updates...
	I1017 19:31:16.921944  644297 config.go:182] Loaded profile config "multinode-519393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:31:16.921962  644297 status.go:174] checking status of multinode-519393 ...
	I1017 19:31:16.922494  644297 cli_runner.go:164] Run: docker container inspect multinode-519393 --format={{.State.Status}}
	I1017 19:31:16.944279  644297 status.go:371] multinode-519393 host status = "Stopped" (err=<nil>)
	I1017 19:31:16.944323  644297 status.go:384] host is not running, skipping remaining checks
	I1017 19:31:16.944338  644297 status.go:176] multinode-519393 status: &{Name:multinode-519393 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:31:16.944389  644297 status.go:174] checking status of multinode-519393-m02 ...
	I1017 19:31:16.944679  644297 cli_runner.go:164] Run: docker container inspect multinode-519393-m02 --format={{.State.Status}}
	I1017 19:31:16.963219  644297 status.go:371] multinode-519393-m02 host status = "Stopped" (err=<nil>)
	I1017 19:31:16.963269  644297 status.go:384] host is not running, skipping remaining checks
	I1017 19:31:16.963281  644297 status.go:176] multinode-519393-m02 status: &{Name:multinode-519393-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.61s)
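
Both status invocations above exit with status 7 once every node is stopped, and the harness elsewhere notes this "may be ok". A sketch of interpreting that exit-code convention from Go; the binary path and profile name are this run's:

// Treat `minikube status` exit code 7 as "cluster stopped", per the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-519393", "status").Run()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
		fmt.Println("cluster stopped (exit 7, may be ok)")
		return
	}
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("cluster running")
}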

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-519393 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-519393 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.024161835s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519393 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.64s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-519393
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-519393-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-519393-m02 --driver=docker  --container-runtime=crio: exit status 14 (72.66607ms)

                                                
                                                
-- stdout --
	* [multinode-519393-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-519393-m02' is duplicated with machine name 'multinode-519393-m02' in profile 'multinode-519393'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-519393-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-519393-m03 --driver=docker  --container-runtime=crio: (23.694465109s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-519393
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-519393: exit status 80 (296.000622ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-519393 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-519393-m03 already exists in multinode-519393-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-519393-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-519393-m03: (2.40916799s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.53s)
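
The conflict the test provokes is between a new profile name and the machine names of an existing multi-node profile, where node N of profile P is named "P-mNN" (hence multinode-519393-m02 above). A hypothetical sketch of that uniqueness rule; machineNames is an illustrative helper, not minikube's:

package main

import "fmt"

// machineNames expands a profile into per-node machine names, mirroring the
// "-m02"/"-m03" suffixes visible in the log.
func machineNames(profile string, nodes int) []string {
	names := []string{profile}
	for i := 2; i <= nodes; i++ {
		names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
	}
	return names
}

func main() {
	existing := machineNames("multinode-519393", 3)
	candidate := "multinode-519393-m02"
	for _, name := range existing {
		if name == candidate {
			fmt.Printf("MK_USAGE: profile name %q collides with machine %q\n", candidate, name)
			return
		}
	}
	fmt.Println("profile name is unique")
}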

                                                
                                    
TestPreload (105.17s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-589384 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-589384 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (47.763450381s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-589384 image pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-589384
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-589384: (5.885770503s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-589384 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1017 19:34:09.663120  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-589384 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (47.856126344s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-589384 image list
helpers_test.go:175: Cleaning up "test-preload-589384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-589384
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-589384: (2.471447558s)
--- PASS: TestPreload (105.17s)

                                                
                                    
TestScheduledStopUnix (98.44s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-144766 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-144766 --memory=3072 --driver=docker  --container-runtime=crio: (21.833375133s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-144766 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-144766 -n scheduled-stop-144766
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-144766 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1017 19:34:46.843570  495725 retry.go:31] will retry after 113.682µs: open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/scheduled-stop-144766/pid: no such file or directory
I1017 19:34:46.844770  495725 retry.go:31] will retry after 224.738µs: open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/scheduled-stop-144766/pid: no such file or directory
I1017 19:34:46.845920  495725 retry.go:31] will retry after 159.051µs: open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/scheduled-stop-144766/pid: no such file or directory
I1017 19:34:46.847097  495725 retry.go:31] will retry after 426.359µs: open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/scheduled-stop-144766/pid: no such file or directory
I1017 19:34:46.848235  495725 retry.go:31] will retry after 309.704µs: open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/scheduled-stop-144766/pid: no such file or directory
I1017 19:34:46.849356  495725 retry.go:31] will retry after 394.428µs: open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/scheduled-stop-144766/pid: no such file or directory
I1017 19:34:46.850501  495725 retry.go:31] will retry after 995.875µs: open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/scheduled-stop-144766/pid: no such file or directory
I1017 19:34:46.851636  495725 retry.go:31] will retry after 2.274335ms: open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/scheduled-stop-144766/pid: no such file or directory
I1017 19:34:46.854821  495725 retry.go:31] will retry after 3.459421ms: open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/scheduled-stop-144766/pid: no such file or directory
I1017 19:34:46.859038  495725 retry.go:31] will retry after 4.51889ms: open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/scheduled-stop-144766/pid: no such file or directory
I1017 19:34:46.864255  495725 retry.go:31] will retry after 8.170491ms: open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/scheduled-stop-144766/pid: no such file or directory
I1017 19:34:46.874066  495725 retry.go:31] will retry after 12.136844ms: open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/scheduled-stop-144766/pid: no such file or directory
I1017 19:34:46.887344  495725 retry.go:31] will retry after 15.221656ms: open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/scheduled-stop-144766/pid: no such file or directory
I1017 19:34:46.903592  495725 retry.go:31] will retry after 23.018662ms: open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/scheduled-stop-144766/pid: no such file or directory
I1017 19:34:46.926811  495725 retry.go:31] will retry after 40.370851ms: open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/scheduled-stop-144766/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-144766 --cancel-scheduled
E1017 19:34:53.360178  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-144766 -n scheduled-stop-144766
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-144766
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-144766 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-144766
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-144766: exit status 7 (75.51895ms)

                                                
                                                
-- stdout --
	scheduled-stop-144766
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-144766 -n scheduled-stop-144766
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-144766 -n scheduled-stop-144766: exit status 7 (72.480304ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-144766" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-144766
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-144766: (5.145547921s)
--- PASS: TestScheduledStopUnix (98.44s)
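
The retry.go lines above poll for the scheduled-stop pid file with a jittered, roughly doubling delay. A minimal sketch of that loop; the path, the 15-attempt cap, and the starting delay are examples rather than minikube's exact constants:

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

func main() {
	path := "/tmp/scheduled-stop-example/pid" // example path, not this run's
	delay := 100 * time.Microsecond
	for attempt := 1; attempt <= 15; attempt++ {
		if _, err := os.Stat(path); err == nil {
			fmt.Println("pid file appeared")
			return
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("retry %d: will retry after %v\n", attempt, wait)
		time.Sleep(wait)
		delay *= 2 // escalate, matching the doubling pattern in the log
	}
	fmt.Println("gave up waiting for pid file")
}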

                                                
                                    
TestInsufficientStorage (9.83s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-379717 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-379717 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.308534395s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"73e9950c-04e9-464a-86c6-2bc848756f6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-379717] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"378ba39c-8319-4df5-9110-b88d0ca733ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21753"}}
	{"specversion":"1.0","id":"7d88d4c9-b02a-40de-b653-e191ff681221","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a42bf716-319a-4b8c-8794-f11fdc937a4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig"}}
	{"specversion":"1.0","id":"b6dca3c9-9db6-4388-b926-6e9906c86695","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube"}}
	{"specversion":"1.0","id":"3822b4ac-b803-4cf5-9368-439f14b2df31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cc51379d-30f6-470c-972b-e4193ca9afc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"aa69833c-1267-4029-8dda-f86007360250","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7772dff0-441e-47ff-831a-fcfa3ea4c3dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"dc011aef-8260-4864-b962-199fd42f7b6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e52295d0-5c3c-4743-9902-d37e0e70bba8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"73b81026-9ee6-4f2d-9395-91599544731d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-379717\" primary control-plane node in \"insufficient-storage-379717\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"779ff075-4b12-420b-b6b6-3d96ed1e11ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fea7019d-4c43-4439-a014-77fd50817889","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"72708908-40a8-4ec8-b09c-2797dc7a5ac3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-379717 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-379717 --output=json --layout=cluster: exit status 7 (286.967871ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-379717","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-379717","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1017 19:36:10.590943  664678 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-379717" does not appear in /home/jenkins/minikube-integration/21753-492109/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-379717 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-379717 --output=json --layout=cluster: exit status 7 (289.759063ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-379717","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-379717","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1017 19:36:10.881930  664789 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-379717" does not appear in /home/jenkins/minikube-integration/21753-492109/kubeconfig
	E1017 19:36:10.892465  664789 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/insufficient-storage-379717/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-379717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-379717
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-379717: (1.94664762s)
--- PASS: TestInsufficientStorage (9.83s)
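
With --output=json, each stdout line above is a CloudEvents-style JSON object whose type field distinguishes steps, infos, and errors. A sketch that decodes such a stream and surfaces the error events; the field names come straight from the log:

// Decode minikube's --output=json event stream from stdin and print errors,
// e.g. `minikube start --output=json ... | go run decode.go` (file name is an example).
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate interleaved non-JSON output
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}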

                                                
                                    
TestRunningBinaryUpgrade (47.22s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.490458738 start -p running-upgrade-329131 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.490458738 start -p running-upgrade-329131 --memory=3072 --vm-driver=docker  --container-runtime=crio: (22.690914163s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-329131 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-329131 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.559297695s)
helpers_test.go:175: Cleaning up "running-upgrade-329131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-329131
E1017 19:39:09.662827  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-329131: (2.453163866s)
--- PASS: TestRunningBinaryUpgrade (47.22s)

                                                
                                    
TestKubernetesUpgrade (301.61s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.32065201s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-137244
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-137244: (1.814457965s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-137244 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-137244 status --format={{.Host}}: exit status 7 (71.478732ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.732466453s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-137244 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (84.506314ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-137244] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-137244
	    minikube start -p kubernetes-upgrade-137244 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1372442 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-137244 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-137244 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.963028475s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-137244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-137244
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-137244: (2.559545507s)
--- PASS: TestKubernetesUpgrade (301.61s)
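
The downgrade attempt fails fast with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) before touching the cluster. A sketch of such a guard using golang.org/x/mod/semver, with the two versions from this run; the exit code mirrors the log, while the comparison wiring is illustrative:

package main

import (
	"fmt"
	"os"

	"golang.org/x/mod/semver"
)

func main() {
	current, requested := "v1.34.1", "v1.28.0"
	if semver.Compare(requested, current) < 0 {
		fmt.Printf("K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade %s cluster to %s\n", current, requested)
		os.Exit(106)
	}
	fmt.Println("same version or upgrade: allowed")
}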

                                                
                                    
TestMissingContainerUpgrade (79.33s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2806606377 start -p missing-upgrade-447546 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2806606377 start -p missing-upgrade-447546 --memory=3072 --driver=docker  --container-runtime=crio: (27.934916894s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-447546
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-447546: (12.917160847s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-447546
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-447546 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-447546 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.590098569s)
helpers_test.go:175: Cleaning up "missing-upgrade-447546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-447546
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-447546: (2.413001673s)
--- PASS: TestMissingContainerUpgrade (79.33s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-573158 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-573158 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (93.725357ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-573158] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
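
The usage error above (exit status 14) is a mutual-exclusion check between --no-kubernetes and --kubernetes-version. A self-contained sketch of that validation with the standard flag package; only the flag names and exit code come from the log, the wiring is illustrative:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags ok")
}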

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (32.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-573158 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1017 19:36:16.427855  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-573158 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.827925152s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-573158 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (32.17s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (28.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-573158 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-573158 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.353485683s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-573158 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-573158 status -o json: exit status 2 (347.631246ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-573158","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-573158
E1017 19:37:12.730547  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-573158: (2.554894324s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (28.26s)

                                                
                                    
TestNoKubernetes/serial/Start (5.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-573158 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-573158 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.144373283s)
--- PASS: TestNoKubernetes/serial/Start (5.14s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.51s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (62.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.164275723 start -p stopped-upgrade-495327 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.164275723 start -p stopped-upgrade-495327 --memory=3072 --vm-driver=docker  --container-runtime=crio: (43.46287s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.164275723 -p stopped-upgrade-495327 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.164275723 -p stopped-upgrade-495327 stop: (2.462496964s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-495327 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-495327 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (16.171255572s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (62.10s)
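
The upgrade path above is three steps: start the profile with the old release binary, stop it with that same binary, then start it again with the binary under test. A sketch of the flow with os/exec; the binary paths and profile name are this run's and would differ elsewhere:

package main

import (
	"fmt"
	"os/exec"
)

func run(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		fmt.Printf("%s %v failed: %v\n%s", bin, args, err, out)
	}
}

func main() {
	oldBin := "/tmp/minikube-v1.32.0.164275723" // versioned release binary
	newBin := "out/minikube-linux-amd64"        // binary under test
	profile := "stopped-upgrade-495327"

	run(oldBin, "start", "-p", profile, "--memory=3072", "--vm-driver=docker", "--container-runtime=crio")
	run(oldBin, "-p", profile, "stop")
	run(newBin, "start", "-p", profile, "--memory=3072", "--driver=docker", "--container-runtime=crio")
}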

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-573158 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-573158 "sudo systemctl is-active --quiet service kubelet": exit status 1 (343.03114ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
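
The verification shells into the node and runs systemctl is-active --quiet for the kubelet; a non-zero exit (status 3 in this run) means the unit is not active. A local sketch of the same exit-code interpretation, minus the ssh hop minikube's runner adds:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err == nil {
		fmt.Println("kubelet is active")
		return
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Printf("kubelet not active (exit %d)\n", exitErr.ExitCode())
		return
	}
	fmt.Println("could not run systemctl:", err)
}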

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.78s)

                                                
                                    
TestNoKubernetes/serial/Stop (4.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-573158
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-573158: (4.546075252s)
--- PASS: TestNoKubernetes/serial/Stop (4.55s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-573158 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-573158 --driver=docker  --container-runtime=crio: (8.931756858s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.93s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-573158 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-573158 "sudo systemctl is-active --quiet service kubelet": exit status 1 (289.574764ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-495327
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-495327: (1.016076799s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

                                                
                                    
TestPause/serial/Start (38.65s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-022753 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-022753 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (38.651750772s)
--- PASS: TestPause/serial/Start (38.65s)

                                                
                                    
TestNetworkPlugins/group/false (3.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-448344 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-448344 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (186.164981ms)

                                                
                                                
-- stdout --
	* [false-448344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:39:14.057996  711535 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:39:14.058432  711535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:39:14.058445  711535 out.go:374] Setting ErrFile to fd 2...
	I1017 19:39:14.058452  711535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:39:14.059050  711535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-492109/.minikube/bin
	I1017 19:39:14.059890  711535 out.go:368] Setting JSON to false
	I1017 19:39:14.061767  711535 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12093,"bootTime":1760717861,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:39:14.061893  711535 start.go:141] virtualization: kvm guest
	I1017 19:39:14.063542  711535 out.go:179] * [false-448344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:39:14.065231  711535 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:39:14.065229  711535 notify.go:220] Checking for updates...
	I1017 19:39:14.066498  711535 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:39:14.067741  711535 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-492109/kubeconfig
	I1017 19:39:14.069075  711535 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-492109/.minikube
	I1017 19:39:14.070338  711535 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:39:14.071587  711535 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:39:14.073362  711535 config.go:182] Loaded profile config "cert-expiration-141205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:39:14.073553  711535 config.go:182] Loaded profile config "kubernetes-upgrade-137244": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:39:14.073711  711535 config.go:182] Loaded profile config "pause-022753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:39:14.073837  711535 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:39:14.104656  711535 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:39:14.104842  711535 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:39:14.176176  711535 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 19:39:14.16350137 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652154368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:39:14.176336  711535 docker.go:318] overlay module found
	I1017 19:39:14.178177  711535 out.go:179] * Using the docker driver based on user configuration
	I1017 19:39:14.179333  711535 start.go:305] selected driver: docker
	I1017 19:39:14.179360  711535 start.go:925] validating driver "docker" against <nil>
	I1017 19:39:14.179377  711535 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:39:14.181350  711535 out.go:203] 
	W1017 19:39:14.182456  711535 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1017 19:39:14.183854  711535 out.go:203] 

                                                
                                                
** /stderr **
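This MK_USAGE exit is the expected result: the test asserts that minikube rejects --cni=false when the container runtime is crio, since crio has no built-in pod networking and requires a CNI plugin. As a hedged sketch (illustrative only; the test deliberately omits a CNI to trigger the error, and the profile name is just reused from the log above), the same profile would start once an explicit CNI is supplied:

	# assumption: any supported CNI value (e.g. bridge) satisfies the crio requirement
	out/minikube-linux-amd64 start -p false-448344 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio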
net_test.go:88: 
----------------------- debugLogs start: false-448344 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-448344

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-448344

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-448344

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-448344

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-448344

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-448344

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-448344

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-448344

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-448344

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-448344

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-448344

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-448344" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-448344" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-448344" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-448344" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-448344" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-448344" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-448344" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-448344" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-448344" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-448344" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-448344" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 19:37:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-141205
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 19:38:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-137244
contexts:
- context:
    cluster: cert-expiration-141205
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 19:37:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-141205
  name: cert-expiration-141205
- context:
    cluster: kubernetes-upgrade-137244
    user: kubernetes-upgrade-137244
  name: kubernetes-upgrade-137244
current-context: ""
kind: Config
users:
- name: cert-expiration-141205
  user:
    client-certificate: /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/cert-expiration-141205/client.crt
    client-key: /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/cert-expiration-141205/client.key
- name: kubernetes-upgrade-137244
  user:
    client-certificate: /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/kubernetes-upgrade-137244/client.crt
    client-key: /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/kubernetes-upgrade-137244/client.key
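Note the kubectl config above: current-context is "" and only the cert-expiration-141205 and kubernetes-upgrade-137244 contexts exist, which is why every probe in this debugLogs dump reports that the false-448344 context was not found -- the profile exited before a cluster (and hence a kubeconfig context) was ever created. A minimal reproduction, assuming only that kubectl is on PATH:

	kubectl config get-contexts                 # lists the two contexts above; no false-448344
	kubectl --context false-448344 get pods    # fails: context "false-448344" does not exist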

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-448344

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448344"

                                                
                                                
----------------------- debugLogs end: false-448344 [took: 3.187956341s] --------------------------------
helpers_test.go:175: Cleaning up "false-448344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-448344
--- PASS: TestNetworkPlugins/group/false (3.54s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (49.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.035135193s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (49.04s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.2s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-022753 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-022753 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.189223648s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (52.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1017 19:39:53.360195  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.159629526s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-907112 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0c75288d-bccd-48cb-8395-3ac83448ebf7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0c75288d-bccd-48cb-8395-3ac83448ebf7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003274798s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-907112 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.29s)
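The DeployApp step above is the harness's standard pattern: create the busybox pod from testdata/busybox.yaml, poll until pods labeled integration-test=busybox are Running, then exec a trivial command. An equivalent manual sequence, as a sketch (the harness polls through its own helpers; kubectl wait and the 8m timeout are stand-ins, not what the test actually runs):

	kubectl --context old-k8s-version-907112 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-907112 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-907112 exec busybox -- /bin/sh -c "ulimit -n"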

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (41.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (41.202882562s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-907112 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-907112 --alsologtostderr -v=3: (16.058296825s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-907112 -n old-k8s-version-907112
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-907112 -n old-k8s-version-907112: exit status 7 (98.037435ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-907112 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
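The "(may be ok)" note reflects that minikube status exits nonzero for a stopped profile, so the test records exit status 7 but proceeds to enable the addon anyway. A sketch of how a caller can tolerate this, assuming only POSIX shell semantics:

	# capture the nonzero exit of a stopped profile instead of aborting
	out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-907112 \
	  || echo "status exit $? (profile stopped; tolerated, then addons enable is run)"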

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (44.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-907112 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (44.112429693s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-907112 -n old-k8s-version-907112
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.50s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-171807 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [22292e6f-a57f-4f4c-baa0-b41b8ee6e47b] Pending
helpers_test.go:352: "busybox" [22292e6f-a57f-4f4c-baa0-b41b8ee6e47b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [22292e6f-a57f-4f4c-baa0-b41b8ee6e47b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.009168891s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-171807 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (16.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-171807 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-171807 --alsologtostderr -v=3: (16.654053005s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.65s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-599709 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [970734d3-e268-47f0-9b00-efa6c26f8740] Pending
helpers_test.go:352: "busybox" [970734d3-e268-47f0-9b00-efa6c26f8740] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [970734d3-e268-47f0-9b00-efa6c26f8740] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004289488s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-599709 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-171807 -n no-preload-171807
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-171807 -n no-preload-171807: exit status 7 (79.114011ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-171807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (45.71s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-171807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.364637673s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-171807 -n no-preload-171807
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (45.71s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (16.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-599709 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-599709 --alsologtostderr -v=3: (16.553592667s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.55s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lh28q" [d975038c-cb8d-4021-9882-0dd6334eb118] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004110679s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-599709 -n embed-certs-599709
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-599709 -n embed-certs-599709: exit status 7 (73.888354ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-599709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (52.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-599709 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.215657659s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-599709 -n embed-certs-599709
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.56s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lh28q" [d975038c-cb8d-4021-9882-0dd6334eb118] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003784884s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-907112 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-907112 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
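VerifyKubernetesImages runs image list --format=json and flags any image outside the expected minikube set (the kindnetd and busybox entries above). A hedged way to eyeball the same list, assuming jq is available and that the JSON output carries a repoTags field per image:

	out/minikube-linux-amd64 -p old-k8s-version-907112 image list --format=json | jq -r '.[].repoTags[]'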

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-112878 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-112878 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.204319049s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4kqlp" [f0c040e7-b223-41cb-8099-0a58dfcd4632] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003526609s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4kqlp" [f0c040e7-b223-41cb-8099-0a58dfcd4632] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004737044s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-171807 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-171807 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (29.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-438547 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-438547 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (29.682266468s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.68s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mh7df" [548ef298-e15a-4b09-831b-288b15fb3a90] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004378288s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-112878 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a8098647-3058-4af7-ab8b-7ecb428988e6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a8098647-3058-4af7-ab8b-7ecb428988e6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004266055s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-112878 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mh7df" [548ef298-e15a-4b09-831b-288b15fb3a90] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003881827s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-599709 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-599709 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-112878 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-112878 --alsologtostderr -v=3: (18.470700293s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (43.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (43.326266114s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (62.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m2.585757331s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (62.59s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (18.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-438547 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-438547 --alsologtostderr -v=3: (18.064959983s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (18.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-112878 -n default-k8s-diff-port-112878
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-112878 -n default-k8s-diff-port-112878: exit status 7 (96.998689ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-112878 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)
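
The check above tolerates a non-zero exit from minikube status on a stopped profile: exit status 7 with "Stopped" still printed is treated as "may be ok" (start_stop_delete_test.go:237). A rough sketch of that tolerance, assuming the exit code observed in this run rather than any documented contract:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStatus runs `minikube status` and forgives the specific non-zero exit
// code this report shows for a stopped profile; stdout still carries the
// host state in that case.
func hostStatus(profile string) (string, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile)
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		return string(out), nil // stopped profile: status output is still usable
	}
	return string(out), err
}

func main() {
	s, err := hostStatus("default-k8s-diff-port-112878")
	fmt.Printf("status=%q err=%v\n", s, err)
}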

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-112878 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-112878 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.4246797s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-112878 -n default-k8s-diff-port-112878
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.80s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438547 -n newest-cni-438547
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438547 -n newest-cni-438547: exit status 7 (126.018801ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-438547 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/newest-cni/serial/SecondStart (12.44s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-438547 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-438547 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (12.107892943s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438547 -n newest-cni-438547
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.44s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-438547 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-448344 "pgrep -a kubelet"
I1017 19:43:25.410137  495725 config.go:182] Loaded profile config "auto-448344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-448344 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b9hb8" [0824a8b9-e489-47f8-b4c0-ed823b8b8902] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b9hb8" [0824a8b9-e489-47f8-b4c0-ed823b8b8902] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004685345s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.22s)
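
helpers_test.go:352 above is a label-selector poll: list pods matching app=netcat and wait until every match reaches Running. The same wait expressed directly with client-go, as a sketch; the poll interval and kubeconfig path are assumptions, and the real helper also tracks readiness conditions:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until every pod matching selector in ns is Running.
func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // not ready yet; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // Pending pods show up first, as logged above
				}
			}
			return true, nil
		})
}

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabel(context.Background(), cs, "default", "app=netcat", 15*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("app=netcat healthy")
}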

TestNetworkPlugins/group/flannel/Start (53.96s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (53.957413689s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.96s)

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-448344 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-448344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-448344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
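
DNS, Localhost, and HairPin are all kubectl exec probes against the same netcat deployment: resolve the in-cluster service name, dial the pod's own port via localhost, then dial back in through the service name (hairpin traffic leaves through the service VIP and must return to the originating pod). A sketch replaying the three probes; the context name is taken from this run:

package main

import (
	"fmt"
	"os/exec"
)

// probe execs a shell command inside the netcat deployment, exactly as
// net_test.go does above, and surfaces the combined output on failure.
func probe(kubeContext, shellCmd string) error {
	cmd := exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v\n%s", shellCmd, err, out)
	}
	return nil
}

func main() {
	ctx := "auto-448344"
	checks := map[string]string{
		"dns":       "nslookup kubernetes.default",
		"localhost": "nc -w 5 -i 5 -z localhost 8080",
		// hairpin: the pod dials its own service name, so the connection
		// leaves via the service VIP and must loop back to the same pod
		"hairpin": "nc -w 5 -i 5 -z netcat 8080",
	}
	for name, c := range checks {
		if err := probe(ctx, c); err != nil {
			fmt.Println(name, err)
		} else {
			fmt.Println(name, "ok")
		}
	}
}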

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hlrh4" [faf76b76-7636-40eb-98aa-e9ef5eb101bc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004891663s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-448344 "pgrep -a kubelet"
I1017 19:43:45.779623  495725 config.go:182] Loaded profile config "enable-default-cni-448344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-448344 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kk7pd" [fddc56bc-4a3e-4aa6-96e1-016d16d731d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kk7pd" [fddc56bc-4a3e-4aa6-96e1-016d16d731d7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004793183s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.23s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hlrh4" [faf76b76-7636-40eb-98aa-e9ef5eb101bc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003915294s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-112878 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-112878 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)
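
start_stop_delete_test.go:302 lists the profile's images and reports anything outside the set expected for the Kubernetes version. A sketch of that diff; the repoTags field name is an assumption about the JSON emitted by image list --format=json, and the expected set below is a stand-in for the list the test maintains:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	RepoTags []string `json:"repoTags"` // assumed field name; verify against your minikube version
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "default-k8s-diff-port-112878", "image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	// Stand-in expected set; the real test derives this from the k8s version.
	expected := map[string]bool{
		"registry.k8s.io/kube-apiserver:v1.34.1": true,
		// ... remaining stock images elided ...
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			if !expected[tag] {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}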

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-448344 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-448344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/calico/Start (51.72s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (51.718770105s)
--- PASS: TestNetworkPlugins/group/calico/Start (51.72s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-448344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/bridge/Start (37.08s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1017 19:44:09.662952  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/addons-642189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (37.079623894s)
--- PASS: TestNetworkPlugins/group/bridge/Start (37.08s)

TestNetworkPlugins/group/custom-flannel/Start (54.67s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (54.673602767s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.67s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-r98zm" [77edaac5-81b9-4d7f-aeeb-7c9a6f127d6a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004650828s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-448344 "pgrep -a kubelet"
I1017 19:44:28.632019  495725 config.go:182] Loaded profile config "flannel-448344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/flannel/NetCatPod (9.61s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-448344 replace --force -f testdata/netcat-deployment.yaml
I1017 19:44:29.059129  495725 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1017 19:44:29.223973  495725 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4gpwb" [52a6a448-b468-4752-ad8b-0c29649e7943] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4gpwb" [52a6a448-b468-4752-ad8b-0c29649e7943] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003790081s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.61s)
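
The two kapi.go:136 lines above show the deployment-stabilization wait: netcat is not considered settled until status.observedGeneration catches up with metadata.generation and the replica counts follow. A sketch of that condition only:

package deploywait

import (
	appsv1 "k8s.io/api/apps/v1"
)

// DeploymentStable reports whether the controller has observed the latest
// generation and rolled every replica, mirroring the condition the kapi.go
// log lines above are waiting on.
func DeploymentStable(d *appsv1.Deployment) bool {
	if d.Spec.Replicas == nil {
		return false // defaulting not applied yet
	}
	return d.Status.ObservedGeneration >= d.Generation &&
		d.Status.UpdatedReplicas == *d.Spec.Replicas &&
		d.Status.ReadyReplicas == *d.Spec.Replicas
}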

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-448344 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-448344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-448344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-448344 "pgrep -a kubelet"
I1017 19:44:41.615622  495725 config.go:182] Loaded profile config "bridge-448344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (8.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-448344 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bnrsj" [dce33e49-11f3-4dfe-8313-4074ce8fa171] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bnrsj" [dce33e49-11f3-4dfe-8313-4074ce8fa171] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.005864578s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.24s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-zq72q" [90f08ff8-10cc-41d8-8fc5-9ef4ad24542c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005132782s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-448344 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-448344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-448344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-448344 "pgrep -a kubelet"
I1017 19:44:52.307326  495725 config.go:182] Loaded profile config "calico-448344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-448344 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-chqcb" [b22d3397-3d75-4f29-afc9-73508897ddbb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1017 19:44:53.361023  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/functional-397448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-chqcb" [b22d3397-3d75-4f29-afc9-73508897ddbb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004701456s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.20s)

TestNetworkPlugins/group/kindnet/Start (41.7s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-448344 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.704195556s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.70s)

TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-448344 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-448344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-448344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-448344 "pgrep -a kubelet"
I1017 19:45:10.762114  495725 config.go:182] Loaded profile config "custom-flannel-448344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-448344 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7767q" [25805005-94f7-434f-b830-5bc3b16a532a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7767q" [25805005-94f7-434f-b830-5bc3b16a532a] Running
E1017 19:45:16.710972  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.00442368s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.29s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-448344 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-448344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-448344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-t8dsm" [ce026cba-b146-4ade-84b8-7e233bc4ffc8] Running
E1017 19:45:42.023532  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/no-preload-171807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:45:43.304895  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/no-preload-171807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:45:45.866593  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/no-preload-171807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003983289s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-448344 "pgrep -a kubelet"
I1017 19:45:48.108454  495725 config.go:182] Loaded profile config "kindnet-448344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-448344 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ghc5t" [418e0b38-279f-4103-9c2b-d5ed28f6a8f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1017 19:45:50.988539  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/no-preload-171807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-ghc5t" [418e0b38-279f-4103-9c2b-d5ed28f6a8f3] Running
E1017 19:45:52.555762  495725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/old-k8s-version-907112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004078986s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.18s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-448344 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-448344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

TestNetworkPlugins/group/kindnet/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-448344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

Test skip (26/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-220565" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-220565
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

TestNetworkPlugins/group/kubenet (3.47s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-448344 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-448344

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-448344

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-448344

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-448344

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-448344

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-448344

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-448344

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-448344

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-448344

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-448344

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: /etc/hosts:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: /etc/resolv.conf:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-448344

>>> host: crictl pods:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: crictl containers:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> k8s: describe netcat deployment:
error: context "kubenet-448344" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-448344" does not exist

>>> k8s: netcat logs:
error: context "kubenet-448344" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-448344" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-448344" does not exist

>>> k8s: coredns logs:
error: context "kubenet-448344" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-448344" does not exist

>>> k8s: api server logs:
error: context "kubenet-448344" does not exist

>>> host: /etc/cni:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: ip a s:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: ip r s:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: iptables-save:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: iptables table nat:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-448344" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-448344" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-448344" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 19:37:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-141205
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 19:38:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-137244
contexts:
- context:
    cluster: cert-expiration-141205
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 19:37:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-141205
  name: cert-expiration-141205
- context:
    cluster: kubernetes-upgrade-137244
    user: kubernetes-upgrade-137244
  name: kubernetes-upgrade-137244
current-context: ""
kind: Config
users:
- name: cert-expiration-141205
  user:
    client-certificate: /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/cert-expiration-141205/client.crt
    client-key: /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/cert-expiration-141205/client.key
- name: kubernetes-upgrade-137244
  user:
    client-certificate: /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/kubernetes-upgrade-137244/client.crt
    client-key: /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/kubernetes-upgrade-137244/client.key
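
Editor's note: the kubeconfig above has current-context set to "" and contains no kubenet-448344 entry, which is exactly why every kubectl probe in this dump fails with "context was not found" or "does not exist". A minimal sketch of confirming that by hand, assuming the job's kubeconfig under /home/jenkins/minikube-integration/21753-492109 is still in place (not part of the test run):

# list the only contexts actually present: cert-expiration-141205 and kubernetes-upgrade-137244
kubectl config get-contexts
# plain kubectl calls would resolve again once an existing context is selected
kubectl config use-context cert-expiration-141205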
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-448344

>>> host: docker daemon status:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: docker daemon config:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: docker system info:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: cri-docker daemon status:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: cri-docker daemon config:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: cri-dockerd version:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: containerd daemon status:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: containerd daemon config:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: containerd config dump:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: crio daemon status:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: crio daemon config:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: /etc/crio:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

>>> host: crio config:
* Profile "kubenet-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448344"

----------------------- debugLogs end: kubenet-448344 [took: 3.285935592s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-448344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-448344
--- SKIP: TestNetworkPlugins/group/kubenet (3.47s)
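
Editor's note: the profile named in the hints above was never created in this run, so the collector had nothing to inspect. A minimal sketch of reproducing the missing cluster by hand; the --network-plugin=kubenet flag is an assumption about this sub-test's usual start argument, not something taken from this log:

# hypothetical manual start of the profile the collector was probing
out/minikube-linux-amd64 start -p kubenet-448344 --network-plugin=kubenet
# the profile should then appear here
out/minikube-linux-amd64 profile list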
TestNetworkPlugins/group/cilium (3.76s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-448344 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-448344

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-448344

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-448344

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-448344

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-448344

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-448344

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-448344

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-448344

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-448344

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-448344

>>> host: /etc/nsswitch.conf:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: /etc/hosts:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: /etc/resolv.conf:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-448344

>>> host: crictl pods:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: crictl containers:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> k8s: describe netcat deployment:
error: context "cilium-448344" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-448344" does not exist

>>> k8s: netcat logs:
error: context "cilium-448344" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-448344" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-448344" does not exist

>>> k8s: coredns logs:
error: context "cilium-448344" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-448344" does not exist

>>> k8s: api server logs:
error: context "cilium-448344" does not exist

>>> host: /etc/cni:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: ip a s:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: ip r s:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: iptables-save:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: iptables table nat:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-448344

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-448344

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-448344" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-448344" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-448344

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-448344

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-448344" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-448344" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-448344" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-448344" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-448344" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: kubelet daemon config:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> k8s: kubelet logs:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 19:37:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-141205
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21753-492109/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 19:38:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-137244
contexts:
- context:
    cluster: cert-expiration-141205
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 19:37:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-141205
  name: cert-expiration-141205
- context:
    cluster: kubernetes-upgrade-137244
    user: kubernetes-upgrade-137244
  name: kubernetes-upgrade-137244
current-context: ""
kind: Config
users:
- name: cert-expiration-141205
  user:
    client-certificate: /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/cert-expiration-141205/client.crt
    client-key: /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/cert-expiration-141205/client.key
- name: kubernetes-upgrade-137244
  user:
    client-certificate: /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/kubernetes-upgrade-137244/client.crt
    client-key: /home/jenkins/minikube-integration/21753-492109/.minikube/profiles/kubernetes-upgrade-137244/client.key
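
Editor's note: as in the kubenet dump, the collector addresses kubectl with an explicit context named after the profile, which is where the two error strings above come from. A hedged sketch of the failing and working forms (the resource choice is illustrative only, not taken from the collector):

# fails with the "does not exist" error seen above: no such context in this kubeconfig
kubectl --context cilium-448344 get pods -A
# passes kubectl's context lookup, since this entry is present
kubectl --context cert-expiration-141205 get pods -A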
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-448344

>>> host: docker daemon status:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: docker daemon config:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: docker system info:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: cri-docker daemon status:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: cri-docker daemon config:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: cri-dockerd version:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: containerd daemon status:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: containerd daemon config:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: containerd config dump:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: crio daemon status:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: crio daemon config:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: /etc/crio:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

>>> host: crio config:
* Profile "cilium-448344" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448344"

----------------------- debugLogs end: cilium-448344 [took: 3.555456449s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-448344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-448344
--- SKIP: TestNetworkPlugins/group/cilium (3.76s)
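
Editor's note: to re-run only this skipped group locally, a minimal sketch, assuming minikube's standard test/integration layout and its integration build tag (the -run pattern is the only part taken from this report):

# run just the cilium network-plugin group against a local build
go test -tags=integration ./test/integration -run 'TestNetworkPlugins/group/cilium' -v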